I have almost 800 lines of Gambit code, at the top of my web page http://www.math.northwestern.edu/~richter/Richter-Curtis-algorithm.tar.gz and I hope someone can help me make the code run faster.
My advisor got roughly similar speed 20 years ago on stone-age Sun-2s, but he was writing C code directly. Seems to me that a Scheme->C translator like Gambit (Version 4.0 beta 11) ought to perform comparably to writing the program in C. At least if the Scheme code is good! I can see two possible ways to improve the Scheme code:
1) slicker datatypes for some lists of lists of small integers
2) more sophisticated sorting
but I have no idea. I worked pretty hard to clean the code up, to make it readable, but I'm no speed wizard, like you and Brad.
My code computes some list for each nonnegative t, and it starts to bog down at t = 50. Here's the time output for t = 53:
7044096 ms real time
6974840 ms cpu time (6953810 user, 21030 system)
43801 collections accounting for 1622147 ms real time (1602770 user, 2510 system)
524299620112 bytes allocated
3641413 minor faults
10 major faults
That's 117.4 minutes of real time, almost 2 hours, including 27.0 minutes of garbage collection, and it looks to me like 524 GB of RAM (can that be right? the machine has 1 GB RAM and a 2.4 GHz CPU).
But 2 hours is about as long as I can sit down at the console of a public machine, and if I submit a job and log out, it hangs. It hangs also if I nice a job. These 2 facts seem very strange to me. So I've gotten as much output as I'm going to get, unless you can help me out, or I win the lottery and buy a machine like this for myself.
Here's something funny about nice/log-out: The job for t = 52 ran (un-niced) in 82 minutes, while I was sitting at the console of this public machine. But when I ran it nice-ed, it didn't finish in 7 hours, even with 3 of those hours sitting at the console.
On Tue, 25 Jan 2005, Bill Richter wrote:
I have almost 800 lines of Gambit code, at the top of my web page http://www.math.northwestern.edu/~richter/Richter-Curtis-algorithm.tar.gz and I hope someone can help me make the code run faster.
I took this as an opportunity to test my profiler. You can see the resulting output at: http://www.iro.umontreal.ca/~germaing/tmp/richter/
It seems to show that the 'hotspots' are concentrated in adem.scm, plus at a few other places around your code. This might help you see where improvements are needed.
Also, you can speed up your code quite a bit simply by adding some declarations in your code. You can read about it in the documentation: http://www.iro.umontreal.ca/~gambit/doc/gambit-c_6.html#IDX153
Something like this [say, at the top of 'Curtis-algorithm.scm']:
(declare (standard-bindings) (fixnum) (not safe) (inline) (inlining-limit 1000) (block))
will give you some boost (I assume your numerals are all fixnums, but that might be wrong). Of course there are tradeoffs to doing this, so make sure you understand what those declarations mean.
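For instance, here's a made-up illustration of the trade-off (not from your code):

(declare (fixnum) (not safe))

;; Sums a list of fixnums.  With these declarations, + compiles to raw
;; fixnum addition: fast, but a non-fixnum element or an overflow
;; silently produces garbage instead of raising an error.
(define (sum-list lst)
  (if (null? lst)
      0
      (+ (car lst) (sum-list (cdr lst)))))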
Finally, if you want to simplify your build process, you could use 'include' instead of 'load' in your files (except for that data file), and simply compile the code with:
% gsc -dynamic Curtis-algorithm.scm
then you try it with:
% gsi Curtis-algorithm
Hope this helps,
Guillaume
Thanks very much, Guillaume, & Marc! Your profiler sounds like a good idea, and I'll try it. I knew adem.scm, which does the sorting, was the hotspot, and I sorta said so in my README file, e.g.
"admissify takes a term and applies a zillion Adem relations to it in order to make it admissible. ... We're only going to admissify terms (a_1 ... a_s) with say s, a_i < 100. There's a lot of this going on, so it would be nice to do it *very quickly*, but no one admissify should take that long, on our fast machines."
"As far as speed goes, I'm convinced that all the work goes into calc-d, which calls Curtis-alg a reasonable small number of times, but each Curtis-alg can involve a ton of time. Curtis-alg must construct a large simplified sorted polynomial, and keep taking d (which calls admissify), and merging the (d a)'s into the large polynomial. That's what takes the time. My advisor talked about millions of terms here."
Also, you can speed up your code quite a bit simply by adding some declarations in your code. You can read about it in the documentation: http://www.iro.umontreal.ca/~gambit/doc/gambit-c_6.html#IDX153
Now this is just the Gambit Manual, which you can read in Emacs with info files, right? There's not enough info here for me:
These declarations are compatible with the semantics of Scheme. Typically used declarations that enhance performance, at the cost of violating the Scheme semantics, are: (standard-bindings), (block), (not safe) and (fixnum).
That's the only hit in the Manual for the string "(fixnum)". Ah, but I see there's lots of hits for just "fixnum", so I'll read up.
Something like this [say, at the top of 'Curtis-algorithm.scm']:
(declare (standard-bindings) (fixnum) (not safe) (inline) (inlining-limit 1000) (block))
will give you some boost (I assume your numerals are all fixnums, but that might be wrong). Of course there are tradeoffs to doing this, so make sure you understand what those declarations mean.
How about I tell you, since I have manual troubles: I have lists of integers x with -1 <= x < 100, and they're not long lists. I call them monomials, or terms. Then I have loooong lists of terms, called polynomials.
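For instance (made-up sample values, just to show the shape):

;; a term (monomial): a short list of small integers in [-1, 100)
(define sample-term '(1 2 1))
;; a polynomial: a (typically very long) list of such terms
(define sample-poly '((1 2 1) (2 3)))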
you can speed up your code quite a bit simply by adding some declarations in your code....Something like this [say, at the top of 'Curtis-algorithm.scm']:
(declare (standard-bindings) (fixnum) (not safe) (inline) (inlining-limit 1000) (block))
Thanks, Guillaume!!! I'm getting 3 times faster!!! I did t = 0 to 51 in 41.1 minutes, in 2 stages:
;; t = 0--50 in 21.8 minutes
;; t = 51 in 19.3 minutes
My previous mark was 134.9 minutes, in 4 stages:
;; t = 0--45, 489826 ms real time, 8.1 minutes
;; t = 46--49, 1510112 ms real time, 25.2 minutes
;; t = 50, 2484842 ms real time, 41.4 minutes
;; t = 51, 3613899 ms real time, 60.2 minutes
I have a suspicion that it runs faster broken up into stages like this, although if that's true, it would point to a garbage-collection problem.
That's fabulous! Now I have a bunch of nitpicking technical stuff:
The Gambit manual was very clear about all these options, in the info node "Miscellaneous extensions", especially as they have an example similar to yours. The one thing not explained is where best to put the `declare', but you advised me on that: at the very top, so it applies to the whole file.
Finally, if you want to simplify your build process, you could use 'include' instead of 'load' in your files (except for that data file), and simply compile the code with:
% gsc -dynamic Curtis-algorithm.scm
then you try it with:
% gsi Curtis-algorithm
Thanks, that was very helpful. Three things I don't understand:
1) Why shouldn't I include the data file "BZ" as well? I changed the name to BZ.scm and did include it. I'm thinking it's related to:
2) I'm a little confused about the (block) declaration, which is about mutation, and BZ.scm is full of mutation statements like
(vector-set! (vector-ref B-vec 3) 7 '(((1 2 1) (2 3)) ((2 1 1) (4 1))))
The Gambit info node says:
In block compilation, the compiler assumes that global variables defined in the current file that are not mutated in the file will never be mutated.
If you include all the different files, then you really only have one file, right? Then you can always (block), since it's all just one file, doesn't matter who mutates. Isn't that what this means:
- special form: include FILE
     The FILE argument must be a string naming an existing file containing Scheme source code. The `include' special form splices the content of the specified source file. This form can only appear where a `define' form is acceptable.
But I don't think I have any global variables that are mutated at all! I certainly think (vector-set! ...) does mutation, but it's not mutation of any global variables. My interpretation is that only
(set! some-global-variable foo)
will run you into block problems.
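Here's a toy example of the distinction I mean (simplified names and values, not my actual BZ.scm):

(define counter 0)
(set! counter (+ counter 1))            ; mutates the global binding itself

(define B-vec (make-vector 10 '()))
(vector-set! B-vec 3 '((1 2 1) (2 3)))  ; mutates the vector's contents;
                                        ; the binding B-vec is never set!-ed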
3) You're not saying, are you, that
% gsc -dynamic Curtis-algorithm.scm
% gsi Curtis-algorithm
is essentially equivalent to
% gcc -O2 -L. -I. Curtis-algorithm.c Curtis-algorithm_.c -lgambc
% a.out
I didn't find the Gambit manual clear on this point.
Also, I couldn't run your profiler with gsc/gcc/a.out. Your profiler seems to need load rather than include, as in my file profile-Curtis.scm, based on your example.scm:
#!/rhome/richter/my-gambit/bin/gsi-script
(load "statprof.scm")
(define (main)
  (profile-start!)
  (load "Curtis-algorithm.scm")
  (profile-stop!)
  (write-profile-report "prof-CA"))
So I tried:
% gsc Curtis-algorithm statprof profile-Curtis
% gcc -O2 -L. -I. -o prof.out Curtis-algorithm.c statprof.c profile-Curtis.c profile-Curtis_.c -lgambc
and it did not create the directory "prof-CA". Neither did this:
% gsc -dynamic Curtis-algorithm statprof profile-Curtis
% gsi Curtis-algorithm statprof profile-Curtis
The only thing that worked was
% gsi Curtis-algorithm.scm statprof.scm profile-Curtis.scm
But I'd really like to run your profiler on compiled jobs. Otherwise the profiled run may be too slow to finish at all. I imagine your profiler will give different ratios for different values of t.
I used your profiler for Min_t = 0 & Max_t = 45, and got a smaller denominator (33982) than you did; using my Emacs-kludge technique, I calculate that Mono-left-lex got 62% of the hits.
(+ 3982 874 652 2276 2544 5356 1094 1076 3362)
=> 21216
(/ 21216.0 33982)
=> 0.6243305279265493
But what's more interesting to me is that Poly-simplify, which I had rewritten as an accumulator version, only got 1% of the hits. Maybe that's because the accumulator is so efficient, but I bet it doesn't make much difference either way. Your profiler will tell me!
(+ 24 6 55 6 75 22 45 45 86 39 50)
=> 453
(/ 453.0 33982)
=> 0.013330586781237126
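For the record, here's roughly the shape of what I mean by an accumulator version (a sketch of the idea only, assuming equal adjacent terms in a sorted polynomial cancel in pairs mod 2; not my actual Poly-simplify):

(define (poly-simplify-sketch sorted-terms)
  (let loop ((ts sorted-terms) (acc '()))
    (cond ((null? ts) (reverse acc))
          ((and (pair? (cdr ts)) (equal? (car ts) (cadr ts)))
           (loop (cddr ts) acc))                 ; mod 2: an equal pair cancels
          (else (loop (cdr ts) (cons (car ts) acc))))))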
Guillaume, I'm really getting almost *30* times faster!!! Your advice is better than I thought. I did t = 0 to 51 in 4.7 minutes. Yahoo!
you can speed up your code quite a bit simply by adding some declarations in your code....Something like this [say, at the top of 'Curtis-algorithm.scm']:
(declare (standard-bindings) (fixnum) (not safe) (inline) (inlining-limit 1000) (block))
Yesterday I goofed, and didn't put this quite at the top of the file. I put it below the line (include "adem.scm") where, as your profiler showed, most of the action takes place. Now with your code at the top, I did t = 0 to 51 in 4.7 minutes, but my previous mark was 134.9 minutes, in 4 stages:
;; t = 0--45, 489826 ms real time, 8.1 minutes
;; t = 46--49, 1510112 ms real time, 25.2 minutes
;; t = 50, 2484842 ms real time, 41.4 minutes
;; t = 51, 3613899 ms real time, 60.2 minutes
I calculate:
(/ 134.9 4.7)
=> 28.70212765957447
My goof graphically demonstrates the truth of the Gambit info node "Miscellaneous extensions":
- special form: declare DECLARATION...
     This form introduces declarations to be used by the compiler ... Declarations are lexically scoped in the same way as macros.
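So if my reading is right, the layout matters like this (a sketch of the idea, not my actual file):

(include "adem.scm")        ; spliced in ABOVE the declare: compiled without it
(declare (standard-bindings) (fixnum) (not safe) (inline) (inlining-limit 1000) (block))
(include "Lambda-defs.scm") ; spliced in below: gets the fast declarations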
Yup! I said something else dumb yesterday: I whinged that your profiler wouldn't work with gsc/gcc. But of course that's true: your profiler looks at the hits of the Scheme program. If it were gcc, it would look at the hits of the C program, which wouldn't mean anything to us, and with `gcc -O2', the C line numbers wouldn't mean anything at all! I remember that from running gdb for Stallman on Emacs years ago: if you want to run gdb on Emacs, you can't build Emacs optimized.
I also tried rewriting my code to use s8vectors instead of lists for Monomials, and it is not running faster. At least I got the right answers up through (s, t) = (11, 50). Ah, I think I see the problem! I have multiple definitions:
(define Min_t 55)
(define Max_t 55)

(define Min_t 0)
(define Max_t 51)
I should've commented the first pair out. It's funny, but I've noticed this slows the a.out down by quite a bit. Dunno why. The homogeneous vectors look like really nice stuff, partly because of the extra functions we don't have for ordinary vectors:
- procedure: s8vector-append S8VECTOR...
- procedure: subs8vector S8VECTOR START END
Perhaps that should say
- procedure: subs8vector S8VECTOR START (- END 1)
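i.e., if I read it right, END is exclusive, as in (my own example, not from the manual):

(subs8vector (s8vector 10 20 30 40) 1 3)   ; => #s8(20 30): indices 1 and 2, not 3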
I'm willing to give you some help on optimizing your code, but you should first try Guillaume Germain's profiler to understand where the time is spent (Guillaume has profiled your original code and the results are at: http://www.iro.umontreal.ca/~germaing/tmp/richter/). Quite cool stuff!
Marc
(Guillaume has profiled your original code and the results are at: http://www.iro.umontreal.ca/~germaing/tmp/richter/). Quite cool stuff!
Thanks, Marc, and Guillaume! I think I understand what your profiler does, and I looked at the link. I'll post this here in case there are other newbies out there who, like me, were reluctant to try the profiler, or maybe didn't see the point.
I was surprised to see that Mono-left-lex of Lambda-defs.scm gets 50% of the hits, topping the charts, followed by merge-1, which gets 22%. I guess that sounds right, but I goofed something up, you see. I knew the toughest part of the code is these 2 lines of Curtis-alg of Curtis-algorithm.scm:
(let ([X+db (merge-1 (d b) X Mono-left-lex)])
  (Curtis-alg a X+db top-level-tags))
X is a huge sorted polynomial, and we have to merge this little sorted polynomial (d b) into it. So merge-1 is the hog, except that merge-1 keeps calling Mono-left-lex, which is the less-than? function. So it makes sense that Mono-left-lex would be the real hog.
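For anyone following along, here's a minimal sketch of what such a merge looks like, with hypothetical names (the idea, not my actual merge-1):

;; Merge two sorted lists into one sorted list, calling less-than?
;; once per step -- which is why the comparator collects the hits.
(define (merge-sorted xs ys less-than?)
  (let loop ((xs xs) (ys ys) (acc '()))
    (cond ((null? xs) (append (reverse acc) ys))
          ((null? ys) (append (reverse acc) xs))
          ((less-than? (car ys) (car xs))
           (loop xs (cdr ys) (cons (car ys) acc)))
          (else
           (loop (cdr xs) ys (cons (car xs) acc))))))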
But if I'd understood that, I would have moved Mono-left-lex to adem.scm, to put all the sorting-rats in the same file-trap.
Here's how I got 22%. It would be nice if I could paste better from the browser window into Emacs; maybe that's impossible because of the cool colors. But I pasted all of merge-1 from the browser into Emacs and got
31: [1374/45367]
(cond
32: [458/45367]
etc., and then I used an Emacs kbd macro to regexp-search for the numerators and add them up:
(+ 1374 458 463 2115 1744 639 56 652 46 620 1730)
=> 9897
(/ 9897.0 45367)
=> 0.21815416492163908
That's 22%. Did I get that right? None of the other hits seem high:
[1567/45367] (Curtis-alg a X+db top-level-tags))
so about 3% of the time, and quicksort is a bit lower, and filter of drscheme.scm gets hits, but that's because of quicksort.
Using the same paste + kbd macro on Mono-left-lex, I get
(+ 3656 1040 951 2448 2792 4670 1613 1734 3710)
=> 22614
(/ 22614.0 45367)
=> 0.4984680494632662
or 50%. Thanks!
What values of Min_t & Max_t did you use? I'll try some values myself.
Anyway, I guess I see the point of your profiler: by making code changes, I can see how the hits & colors change, so I see whether I'm making it faster or not. I've got questions right now about accumulators that I can attack this way.
By `quite cool stuff', I assume you mean the program itself, which was popular among mathematicians in my field 20 years ago, and there's one good book written on coding the Curtis algorithm, Tangora's AMS Memoir I cited in my README. None of this cool stuff is due to me.
But 20 years ago, there was a lot of good Math left unsettled, and their programs probably no longer run. Tangora's programs, for example, were in Snobol. My aim here (i.e. the Subject `Math paper') is to write a paper on how to do these calculations by hand up to say t = 30. The problem is that the paperwork gets completely out of hand, even though the Math is tractable. So the only way to do it is to write computer programs that calculate and print the answers, and then we mathematically check the answers. It turns out there's no real cheating: we just use the computer to generate conjectures and paperwork. I think this is an interesting mixture of Math and computers. But 2 things are important:
1) The computer code should be human-readable. Someone who understands the Curtis algorithm ought to be able to flip through the code and say, yeah I understand, and then they will feel confident to run the code, and play with it even. I've just written the dumbest vanilla version of the program. There's all kinds of cool tricks to play here, some described by Tangora.
2) The computer code should run reasonably fast compared to the stone-age code of 20 years ago. On DrScheme (from which I learned a lot of good programming tips), I was much slower than the stone-age code. That's a gaffe that would sink my paper. This is just PR, I know.
-- Best, Bill