[gambit-list] Re: Scheme/C help request for code needed for Math paper

Bill Richter richter at math.northwestern.edu
Mon Jan 31 00:56:11 EST 2005


Marc, with a new sorting algorithm I'm going so much faster that I
can't even quantify it!  I've updated my web page code:
http://www.math.northwestern.edu/~richter/Richter-Curtis-algorithm.tar.gz

I would like to go even faster, as I'm hitting the wall at t = 71.
Look at ps -aux:

richter   7052 65.9 91.2 1110356 927876 pts/0 D   21:49  16:56 ./71-72zariski.out

The process is using 91.2% of the 1 GB of memory, which is great, but
only 65.9% of the CPU, which I take to mean it's paging out to disk.
So unless you Scheme wizards have another trick for me, I'll have to
quit at t = 70.
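
(If I'm reading the Gambit manual right, one thing I could try myself
is capping the heap below physical RAM with a runtime option, e.g.

./71-72zariski.out -:h900000

so the collector works harder instead of the OS paging to disk.  The
option syntax is my guess from the docs, so correct me if I have it
wrong.)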

I think there's a real lesson here about Math, C & Scheme (it will
take me 3 steps to get to it), which you taught me by writing:

   I'm willing to give you some help on optimizing your code, but you
   should first try Guillaume Germain's profiler to understand where
   the time is spent ...

1) A fast, smart Scheme->C translator is great for human programmers,
because C code tends not to be human-readable, with all the allocation
and de-allocation of pointers, to name just the worst of C.

2) But there are still Scheme optimizations to make, which might be
hard on human-readability, and also general weird algorithm-hacks,
such as the numerical methods you & Brad are experts at, which I want
to stay away from.  I remember Brad posting on cls that the gmp folks
were real experts on numerical methods.  And gmp is just multiplying
(large) numbers!  I'm really glad the gmp folks (and Brad) are good at
numerical methods, but I want to stay away from that myself.

3) But as you taught me by telling me to profile, there's also the
"real" mathematical algorithm we're using!!!  By profiling, an
ordinary non-wizard Schemer like me can easily see when the algorithm
needs changing at a "high level", i.e. without any wizard-hacks.
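
(For anyone following along, the setup for Guillaume's profiler is
tiny.  If I remember the statprof interface right, it's roughly:

(load "statprof")

(profile-start!)
(run-the-computation)   ;; stand-in for your own top-level call
(profile-stop!)

;; writes the html report directory
(write-profile-report "profile-results")

where run-the-computation and the directory name are just
placeholders.)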

Something more specific about my code: I have more or less long lists
of words, sorted alphabetically (i.e. in left-lexicographical order),
into which I want to merge individual new words.  Say I have a list of
words from "aardvark" to "zzzlurp", and I want to insert the word
"mouse".  I was going through the list in order, comparing "mouse"
against every word, until I reached the gap, say between "Morse" &
"move", where "mouse" belongs.  To me that looked OK, but Guillaume's
profiler showed I was spending about 50% of the time checking which of
two English or French words came first in alphabetical order.  Finally
it dawned on me that there was a simple sorting fix I could make:
store my "words" as trees.  I didn't see this because I'm a sorting
expert (I'm anything but!); I saw it because that's how we look up
words in a dictionary, which has little black letters engraved on the
page edges so we can go right to the "M" section to find "mouse".  Why
not keep going and use full trees, instead of just clumping words by
their 1st letter?
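
To make the idea concrete, here is a toy version of the fix.  This is
not the code in Lambda-defs.scm, just an alist-based sketch of a
letter trie, with made-up names:

;; A trie node is an association list mapping a character to its
;; subtrie; the entry (end . #t) marks the end of a complete word.

(define (del-assv key alist)
  ;; drop the first entry whose car is eqv? to key
  (cond ((null? alist) '())
        ((eqv? (caar alist) key) (cdr alist))
        (else (cons (car alist) (del-assv key (cdr alist))))))

(define (trie-insert trie word)
  ;; add word to trie; the cost is proportional to the length of
  ;; the word, not to the number of words already stored
  (let insert ((chars (string->list word)) (node trie))
    (if (null? chars)
        (if (assq 'end node) node (cons '(end . #t) node))
        (let* ((c     (car chars))
               (entry (assv c node))
               (sub   (if entry (cdr entry) '())))
          (cons (cons c (insert (cdr chars) sub))
                (if entry (del-assv c node) node))))))

(define (trie-member? trie word)
  (let lookup ((chars (string->list word)) (node trie))
    (if (null? chars)
        (and (assq 'end node) #t)
        (let ((entry (assv (car chars) node)))
          (and entry (lookup (cdr chars) (cdr entry)))))))

;; (define t (trie-insert (trie-insert '() "Morse") "mouse"))
;; (trie-member? t "mouse")  => #t
;; (trie-member? t "move")   => #f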

Back to Gambit Scheme->C speed: these numbers look fast to me:

;;  t = 0->55, 2.4 minutes
;;  t = 56->60, 11.2 minutes
;;  t = 61->65, 49.4 minutes, incl 31.0 minutes of garbage collection
;;  t = 65->70, 74.3 minutes, incl 47.0 minutes of garbage collection
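
(One GC tweak I haven't tried yet, again going from my reading of the
Gambit manual: you can ask the runtime for a bigger minimum heap on
the command line, e.g.

./71-72zariski.out -:m300000

which should hold the heap at roughly 300 MB so the collector runs
less often.  I haven't measured whether it actually helps here.)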

With my new sorting algorithm, I'm even getting acceptable speed on
DrScheme:

;;  t = 0->55, 10.0 minutes
;;  t = 56->60, 43.5 minutes

When I wrote this code a few years ago, I couldn't get up to t = 60
running a DrScheme job (actually byte-compiled with mzc) all night.

I just profiled t = 59, and it took 35.4 minutes, showing the speed of
the Gambit C code, and the power of Guillaume's speed-hack: 

(declare
 (standard-bindings)   ; assume standard procedures aren't redefined
 (fixnum)              ; all arithmetic is fixnum arithmetic
 (not safe)            ; drop runtime type & bounds checks
 (inline)              ; allow user procedures to be inlined
 (inlining-limit 1000) ; permit aggressive code growth when inlining
 (block))              ; whole-file compilation, so calls can be direct

t = 59 is important to me, because that's the 1st place where my
output diverges from my advisor's.  I'm sure I'm right, partly because
DrScheme replicated my answer.  I found only 4 divergences for t <= 70
between my output and his 20-year-old Stone-Age Sun2 C output.

The hotspots are 2 functions in Lambda-defs.scm: I spent 29% of the
time in Poly->Tree, which converts my polynomials (my lists of
English/French words) into trees, and 36% of the time in merge-trees,
which merges 2 trees.
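
In the same toy notation as the trie sketch above (again, this is not
the real merge-trees, just the shape of it), merging two alist tries
is a union keyed on the first letter, recursing wherever both tries
share one; it reuses del-assv from before:

(define (merge-tries a b)
  ;; union of two tries; shared letters are merged recursively
  (cond ((null? a) b)
        ((null? b) a)
        (else
         (let* ((entry (car a))
                (key   (car entry))
                (match (assv key b)))
           (if match
               (cons (cons key (if (eq? key 'end)
                                   #t
                                   (merge-tries (cdr entry) (cdr match))))
                     (merge-tries (cdr a) (del-assv key b)))
               (cons entry (merge-tries (cdr a) b)))))))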

That's still a lot, but it's a big improvement over my old sorting
code, where I spent 63% of the time in Mono-left-lex and 29% in
merge-1.  I got my percentages by my usual Emacs kbd macro kludge:

(+ 588 240 2081 2527 6764 540 590 3044 160 3133 456 2388 3542 2386
4865 378 417 2717 2468 287 3551 )
43122
(/ 43122.0 148005)
0.2913550217898044


(+ 5761 2007 666 1338 188 21539 662 800 814 743 762 5872 51 805 1238
438 1069 2794 620 1501 3126)
52794
(/  52794.0 148005)
0.3567041653998176
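
Of course the kludge could just as well be a two-line Scheme helper
that sums the per-function sample counts and divides by the total:

(define (fraction-of-total counts total)
  ;; fraction of the profile samples landing in one set of functions
  (/ (exact->inexact (apply + counts)) total))

;; e.g. (fraction-of-total '(588 240 2081 ... 3551) 148005) => .29135...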

The profile html dir is included in my new tar.gz file.

-- 
Best,
Bill 

