At 21:04 -0500 on 02.10.2006, Bill Richter wrote:
Excellent, Marc! I suppose I made a contribution to Gambit, through tenacity if not Scheme sense.
Next time, try to prove your tenacity by tracking down the problem yourself, please :)
I didn't even know that #( meant a constant.
All literal values are (or can be considered) constants.
Christian.
Next time, try to prove your tenacity by tracking down the problem yourself, please :)
I didn't mean to hog the credit, Christian, and you did most of the work. Let's go back to our earlier correspondence:
I'm coding up a Sudoku technique that involves chains going from one of the 81 cells to another one, beginning & ending on different numbers. There are thousands, maybe tens of thousands, of such chains. OK. There are rules to glue two chains together and make new ones, which then go on this large list of chains. Etc. What's a good speed implementation for this?
Well, I don't know Sudoku and don't have the time to learn about it. So I can't follow what programming technique you're using and what you might be missing. It's only a guess when I suspect that you're building too much data at once in memory, and might profit from lazy evaluation (streams); I did suggest that to you on a different occasion. You seem to think mathematically, so my guess is that you're programming in a rather descriptive way and neither care nor think about how memory is used during processing. Using lazy evaluation in the right places makes memory be allocated only on demand, without changing the program much; see the sketch below.
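(To make the streams suggestion concrete, here is a minimal sketch, my own illustration rather than anything from the thread: a stream is a pair of the first element and a thunk that computes the rest, so elements are produced only when demanded.)

(define (stream-cons x thunk) (cons x thunk))   ; tail is a thunk
(define (stream-car s) (car s))
(define (stream-cdr s) ((cdr s)))               ; force the tail on demand

;; The infinite stream of integers starting at n: only the elements
;; actually demanded are ever computed.
(define (integers-from n)
  (stream-cons n (lambda () (integers-from (+ n 1)))))

;; First k elements of a stream, as a list.
(define (stream-take s k)
  (if (= k 0)
      '()
      (cons (stream-car s) (stream-take (stream-cdr s) (- k 1)))))

;; (stream-take (integers-from 0) 5) => (0 1 2 3 4)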
Christian, I'm not having memory problems with my Sudoku program: I've never noticed it taking up more than 2% of the 2GB of memory. My programs run for a long time in spite of the lack of memory problems. I'm trying to raise a general problem that has nothing to do with Sudoku, other than this: there are 81 cells with 9 possible values, and (* 9 81) = 729.
Now let's consider a relation on this set of 729 elements,

  R subset 729 x 729

R happens to be symmetric, i.e. if (x, y) in R then (y, x) in R also, but that's probably not important. Suppose we have a way of generating R from a smaller subset: we start with R_1 subset 729 x 729 and build R as the union of the R_n, for n = 1, 2, 3, ... We construct R_{n+1} from R_n and R_1 as follows. We're given a predicate

  Glue? : 729 x 729 -> Boolean

and the rule is: if (a, b) in R_1 and (x, y) in R_n, then (a, y) in R_{n+1} iff (Glue? b x) => #t, i.e. the end b of the R_1 chain glues onto the start x of the R_n chain. Let me write this recursion rule in Scheme, even using an undefined function, although of course this would be terribly slow code:
(define (R n)
  (if (= n 1)
      R_1
      (bi-filter-map
       (lambda (a-b x-y)
         (let ([a (first a-b)] [b (second a-b)]
               [x (first x-y)] [y (second x-y)])
           (and (Glue? b x)
                (list a y))))
       R_1
       (R (sub1 n)))))
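(bi-filter-map is the undefined function; here is one plausible reading, my sketch only: filter-map over the cross product of the two lists, keeping the non-#f results of (f x y), with x drawn from the first list.)

(define (bi-filter-map f xs ys)
  (apply append
         (map (lambda (x)
                ;; collect (f x y) for each y in ys, skipping #f results
                (let loop ((ys ys) (acc '()))
                  (cond ((null? ys) (reverse acc))
                        ((f x (car ys))
                         => (lambda (v) (loop (cdr ys) (cons v acc))))
                        (else (loop (cdr ys) acc)))))
              xs)))

(Note also that first, second and sub1 are PLT-isms rather than standard Gambit; in plain Gambit they'd be car, cadr and (lambda (n) (- n 1)).)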
OK. Then there's a rule saying when any element (x, y) in R_n does any good, given by a predicate
Eliminate? : R_n x 729 -> Boolean
If an elimination takes place, we quit; otherwise we build R_{n+1}. But I think the slow part is building R_{n+1}, because R_1 is a list of a few thousand pairs, and R can be over 50,000 pairs. n could be large, possibly as large as 81, but for my long-running programs n doesn't seem to get above 7 or so. I would imagine good schemers have some tricks to do this much faster than my code.
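(One generic trick, my own sketch rather than anything proposed in the thread: since the cost is scanning and consing 50,000-element lists, keep a Gambit table alongside the list, keyed on the pair itself with the default equal? test, so duplicate pairs are detected in O(1) and never accumulate.)

(define (extend-relation R1 Rn Glue? seen)
  ;; Build the next generation of pairs: for each (a b) in R1 and
  ;; (x y) in Rn with (Glue? b x), emit (a y) unless already seen.
  (let loop ((r1 R1) (acc '()))
    (if (null? r1)
        acc
        (let ((a (car (car r1)))
              (b (cadr (car r1))))
          (loop (cdr r1)
                (let inner ((rn Rn) (acc acc))
                  (if (null? rn)
                      acc
                      (let* ((x (car (car rn)))
                             (y (cadr (car rn)))
                             (pair (list a y)))
                        (inner (cdr rn)
                               (if (and (Glue? b x)
                                        (not (table-ref seen pair #f)))
                                   (begin
                                     (table-set! seen pair #t)
                                     (cons pair acc))
                                   acc))))))))))

Called as (extend-relation R_1 (R n) Glue? (make-table)), reusing the same table across generations so the union of the R_n stays duplicate-free.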
Christian, back to your suggestion of using streams to reduce memory consumption in my other program: I can't imagine how that might work. What happened there is that I was building a large list, onto which I would `merge' smaller lists. The merging was destructive: if an element in the smaller list occurred in the large list, both elements disappeared. This merging of smaller lists continued until the large list was empty, or else it was somehow shown that the large list couldn't become empty. The whole business went a lot faster when I switched to trees, but I think the extra memory usage eventually did me in.
I can't see how streams would help, and I don't think I'm just thinking mathematically. I think I need the entire large list, and that's why it's hogging so much memory. If I just needed a procedure that calculated elements of the large list as they were needed, that would help minimize memory consumption...
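(For what it's worth, here is a sketch of the cancel-on-merge step itself, my illustration only and assuming Gambit's table API: keeping the `large list' as a table used as a set makes each merge O(1) per element, whether or not streams apply.)

(define (cancel-merge! set items)
  ;; For each item: if it's already in the set, both copies cancel
  ;; and we remove it; otherwise we add it.
  (for-each
   (lambda (item)
     (if (table-ref set item #f)
         (table-set! set item)       ; two-argument table-set! removes the key
         (table-set! set item #t)))
   items))

;; (= (table-length set) 0) tests whether the large list has emptied.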
Here's a dumb question, which may be related to gsc in beta 20 now taking -dynamic as the default: How do you now run gsc/gcc? What worked before is this:
% gsc Ultra; gcc -O2 -L. -I. Ultra.c Ultra_.c -lgambc; ./a.out
But now this file "Ultra_.c" is no longer being created, and instead I get the file "Ultra.o1". I'm not sure what to do with it... Everything I've tried so far gives me segfaults. Christian's trick of compile-file works fine:
% gsc
> (compile-file "Ultra.scm")
> (load "Ultra")
From the announce for Beta 20:
- The compiler, gsc, now produces dynamically loadable object files by default. The -dynamic flag is thus optional. To generate a link file (which used to be the default) you must use the -link option.
Guillaume
Thanks, Guillaume, and it now works fine. Ought I to be using -dynamic, though? I just want my program to run fast, and this is pretty fast:

% gsc -link Ultra; gcc -O2 -L. -I. Ultra.c Ultra_.c -lgambc; ./a.out > Ultra.out &

But if -dynamic is faster, I'd sure like to switch!
\begin{politics} But I think it's a bad assumption for you guys to make that Gambit users are C wizards. I think you ought to try to also market your excellent product to dopes like me. Back to you:
- The compiler, gsc, now produces dynamically loadable object files by default. The -dynamic flag is thus optional. To generate a link file (which used to be the default) you must use the -link option.
I don't know what any of these words mean. I did look in the Gambit *info*, and saw:
gsc [-:RUNTIMEOPTION,...] [-i] [-f] [-v]
    [-prelude EXPRESSIONS] [-postlude EXPRESSIONS]
    [-dynamic] [-cc-options OPTIONS] [-ld-options OPTIONS]
    [-warnings] [-verbose] [-report] [-expansion] [-gvm]
    [-debug] [-track-scheme]
    [-o OUTPUT] [-c] [-link] [-flat] [-l BASE]
    [[-] [-e EXPRESSIONS] [FILE]]...
It didn't occur to me that `link' was the opposite of `dynamic'; I looked for `static'. Perhaps I should've said that I configured with:
./configure --enable-single-host --prefix=/rhome/richter/Gambit --enable-shared --enable-gcc-opts

\end{politics}
On 10-Oct-06, at 9:42 PM, Bill Richter wrote:
Thanks, Guillaume, and it now works fine. Ought I to be using -dynamic, though? I just want my program to run fast, and this is pretty fast:

% gsc -link Ultra; gcc -O2 -L. -I. Ultra.c Ultra_.c -lgambc; ./a.out > Ultra.out &

But if -dynamic is faster, I'd sure like to switch!
\begin{politics} But I think it's a bad assumption for you guys to make that Gambit users are C wizards. I think you ought to try to also market your excellent product to dopes like me.
But that's exactly why the -dynamic flag has become the default. I think it is rare to want to produce an a.out, and when that's the case the "gsc -link ..." invocation is usually buried in a makefile so there is no need to make that case compact.
Instead of calling GCC yourself, with all the complexity of the compile options, link flags and include directories, simply generate a dynamically loadable file with "gsc Ultra". Then you simply run it with "gsi Ultra". In other words, instead of
gsc -link Ultra
gcc -O2 -L. -I. Ultra.c Ultra_.c -lgambc
./a.out > Ultra.out
you simply do
gsc Ultra
gsi Ultra > Ultra.out
That's hard to beat for simplicity, and you may get better performance because the best set of GCC compiler options will be used.
Marc
[gsc/gsi is] hard to beat for simplicity, and you may get better performance because the best set of GCC compiler options will be used.
Thanks, Marc!!! Yeah, I'm the last guy who needs to be thinking about GCC options myself. OK, I did some crude benchmarking:
% date; gsc -link Ultra; gcc -O2 -L. -I. Ultra.c Ultra_.c -lgambc; ./a.out > Ultra.out; date
13:46:34
13:49:13
% date; gsc Ultra; gsi Ultra > Ultra.out; date
13:50:36
13:53:12
% date; gsc -link U19; gcc -O2 -L. -I. U19.c U19_.c -lgambc; \
  ./a.out > U19.out; date    :-(
13:56:18
14:11:29
% date; gsc U19; gsi U19 > U19.out1; date
14:11:51
14:26:39
Both versions of each job ran in about the same time: about 3 minutes and 15 minutes respectively. For the second gsc/gsi run, which was technically shorter, someone logged into the machine & started running firefox. So I'd like to see what happens on the jobs that run 6 hours.
Marc, here's a medium-size benchmark showing that your gsc/gsi way is superior to my running gcc directly: 64.6 minutes beats 82 minutes.
% gsc U16; gsi U16 > U16.out4 &
3880182 ms real time
3876132 ms cpu time (3874121 user, 2011 system)
42223 collections accounting for 33745 ms real time (33617 user, 171 system)
% gsc -link U16; gcc -O2 -L. -I. U16.c U16_.c -lgambc; ./a.out > U16.out5 &
4925277 ms real time
4924971 ms cpu time (4923894 user, 1077 system)
43508 collections accounting for 34017 ms real time (34065 user, 155 system)
> (/ 3880.0 60)
64.66666666666667
> (/ 4925.0 60)
82.08333333333333