Dear Gambitizers,
I have programmed a simulation, and the list has successfully helped me with a few prior problems. Those are taken care of, but now I have a much more basic problem: I'm using up memory, to the point where my kernel (named after a certain Finn)
# uname -srmp
Linux 2.6.26-custom i686 Intel(R) Core(TM)2 Quad CPU Q6700 @ 2.66GHz
kills the program. Of course I had the common misconception that this would never happen in a Lisp-like language and on and on: my real concern is that there is some basic programming technique that I am missing. Basically, is something failing to get garbage-collected?
I will discuss a specific problem below, but I ask the kind generosity of someone who could examine the code (or show me how to better examine it), and let me know if there are any major red-flags, or ways I could improve it.
My basic goal is that since I really enjoy programming in Scheme, I'd like to avoid having to do this in FORTRAN on the big ol' supercomputer here. Yes, I am in school, but my adviser had never heard of Scheme before I mentioned it to her ;)
To stop the memory allocation problem, I attempted to install an exception handler:
(define (mem-handler exc)
  (if (or (heap-overflow-exception? exc)
          (stack-overflow-exception? exc))
      (if (noncontinuable-exception? exc)
          (abort exc)
          exc)
      (with-exception-catcher error-handler
                              (lambda () (raise exc)))))
And then run the main driving routine within this like so:
(with-exception-handler mem-handler (lambda () (gen-sim-data)))
Here's what happens when I run it:
chondestes: /home/joel/lisp/scm/agjones> nice -n +10 jonesim -:m1000000,dR-
*** WARNING -- Variable "gsl-vector" used in module "genxic.o1" is undefined
.9999107177669622 2000 50 16 2 1000. .5 0. 0. 0. 1e-4 1. .25 true
.9999111243206196 2000 50 16 2 1000. .5 0. 0. 0. 1e-4 1. .25 false
.9999110166041458 2000 50 16 2 1000. .5 0. 0. 0. 1e-4 1. .75 true
.9999111775781226 2000 50 16 2 1000. .5 0. 0. 0. 1e-4 1. .75 false
.9999108917731552 2000 50 16 2 1000. .5 0. 0. 0. .09 1. .25 true
.9999104499002971 2000 50 16 2 1000. .5 0. 0. 0. .09 1. .25 false
.9999107898981544 2000 50 16 2 1000. .5 0. 0. 0. .09 1. .75 true
zsh: killed nice -n +10 jonesim -:m1000000,dR-
Thanks for any help you can offer,
Joel
Getting the code
================
The code is available for public download from svn at
http://chondestes.bio.unc.edu/svn/models/agjones
A simple "make" in the top directory will yield a single executable that reads "input.txt." The critical parameter is "N," the first variable in "input.txt." I have tested this at 10, 100, and 2000, and it only finishes successfully at N=10.
To run it you will need the loadable library "genxic.o1" from the package located at
http://chondestes.bio.unc.edu/svn/genxic/trunk
To build the library, do a "make" in the scm/ directory, then copy it to the top directory of agjones/.
I'm not sure this is your specific problem, but it might lead you to a solution. My suggestion is...
avoid using with-exception-*handler* unless you know what you are doing

Instead use with-exception-*catcher*. When you use with-exception-handler, you can get into infinite loops (which gobble the memory in your heap) when your exception handler raises an exception, and this seems to be the case here.
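For concreteness, a minimal sketch of the catcher shape (the handler logic here is an assumption for illustration, not code from this thread; gen-sim-data is your driver):

;; A minimal sketch (assumed handler logic).  The catcher is called
;; with the continuation of the with-exception-catcher form itself,
;; so re-raising from it goes to the next handler out instead of
;; looping back into the same handler.
(with-exception-catcher
 (lambda (exc)
   (if (or (heap-overflow-exception? exc)
           (stack-overflow-exception? exc))
       (begin (display "out of memory\n") (exit 1))
       (raise exc)))              ; anything else: pass it along
 (lambda () (gen-sim-data)))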
Marc
On 22-Sep-08, at 12:26 PM, Joel J. Adamson wrote:
(define (mem-handler exc)
  (if (or (heap-overflow-exception? exc)
          (stack-overflow-exception? exc))
      (if (noncontinuable-exception? exc)
          (abort exc)
          exc)
      (with-exception-catcher error-handler
                              (lambda () (raise exc)))))
And then run the main driving routine within this like so:
(with-exception-handler mem-handler (lambda () (gen-sim-data)))
"Feeley" == Marc Feeley feeley@iro.umontreal.ca writes:
Feeley> avoid using with-exception-*handler* unless you know
Feeley> what you are doing
Oops! I got them mixed up. What would I know if I knew what I was doing?
I made the change and still get the same crash.
Thanks, Joel
On Mon, Sep 22, 2008 at 8:26 PM, Joel J. Adamson <adamsonj@email.unc.edu> wrote:
I'm using up memory, to the point where my kernel (named after a certain Finn) kills the program. Of course I had the common misconception that this would never happen in a Lisp-like language
You can write spaghetti code in any language. The same applies to memory leaks :)
concern is that there is some basic programming technique that I am missing. Basically, is something failing to get garbage-collected?
After skimming your code, I see nothing obvious; however, there are definitely some areas that deserve a closer look:
1) Your cartesian product code could potentially use *tons* of memory. It doesn't appear to be called from anywhere, but maybe I'm reading your code too fast.
2) your use of call/cc in the fitness function appears gratuitous - but again I didn't deeply read the whole recursion pattern. The thing to remember about call/cc is that it potentially can duplicate large portions of the stack. 9 times out of 10 you will be better off writing your code using explicit CPS, anyway. (Less filling! Tastes Great!)
Have you traced this program to see where it's allocating heavily?
david
Thanks for all the replies so far --- Gambit's user community is the best I've found for any Scheme implementation.
Update on my thinking about this problem: I think that if I am adding more iterations (which is all I'm doing by tweaking my parameters), the program should just take longer to run, but that's not what happens.
"DR" == David Rush kumoyuki@gmail.com writes:
DR> You can write spaghetti code in any language. The same applies
DR> to memory leaks :)
Good point: my frustration was making me think "Maybe what they say about Common Lisp is true," but then I realized "whatever crappy coding I'm doing I can do just as crappy in Common Lisp, C, Python, even Perl --- I'm a multilingual programmer..."
DR> After skimming your code, I see nothing obvious; however, there
DR> are definitely some areas that deserve a closer look:

DR> 1) Your cartesian product code could potentially use *tons* of
DR> memory. It doesn't appear to be called from anywhere, but maybe
DR> I'm reading your code too fast.
The data structure produced by `Cartesian' just sits there and gets read --- it doesn't grow and it doesn't get passed to anything --- could it still be getting copied?
I could certainly find a better way to derive the parameters for each run at the beginning of each run instead, e.g., reading the data into a hash-table, or reading the file at each run. Which would be more efficient (in terms of not crashing my program)? Time is not really an issue, it's getting the program to finish.
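For reference, a sketch of the hash-table idea using Gambit's built-in tables (the keys shown are hypothetical stand-ins for the input.txt fields, not the actual parameter names):

;; hedged sketch: read the parameters once into a Gambit table;
;; the keys here are hypothetical stand-ins for the input.txt fields
(define params (make-table))
(table-set! params 'N 2000)
(table-set! params 'mutation-rate 1e-4)
(display (table-ref params 'N)) ; => 2000; lookups stay cheap thereafter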
DR> 2) your use of call/cc in the fitness function appears
DR> gratuitous
The fitness function is not recursive in the current version, but I could certainly find a better way to exit. I will point out that this is the sort of thing that beginner texts say call/cc is perfect for --- the typical example being applying `*' to a list and using call/cc to exit when you hit a zero.
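That textbook pattern, as a minimal sketch:

;; multiply a list, escaping via call/cc so the multiplications
;; pending on the stack are abandoned as soon as a zero fixes
;; the answer
(define (product lst)
  (call-with-current-continuation
   (lambda (return)
     (let loop ((lst lst))
       (cond ((null? lst) 1)
             ((zero? (car lst)) (return 0)) ; skip all pending (*)s
             (else (* (car lst) (loop (cdr lst)))))))))

(product '(1 2 3 0 4)) ; => 0 without multiplying anything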
DR> The thing to remember about call/cc is that it potentially can
DR> duplicate large portions of the stack. 9 times out of 10 you
DR> will be better off writing your code using explicit CPS,
DR> anyway. (Less filling! Tastes Great!)
Hmmm...okay, then I'm getting more mixed messages about call/cc. I know I've read in more than one place that "any program written in CPS can be rewritten more efficiently using call-with-current-continuation..." I'm not blaming anybody: I want to know who to believe. I'll certainly believe a well-phrased argument for compelling use of CPS from the gambit-list.
DR> Have you traced this program to see where it's allocating
DR> heavily?
What's the best way to do that? I've traced certain functions and they work the way that I expect. Is there way to trace the entire call stack, as insane as that might look?
Thanks a heap ;)
Joel
On Tue, Sep 23, 2008 at 2:25 PM, Joel J. Adamson <adamsonj@email.unc.edu> wrote:
"DR" == David Rush kumoyuki@gmail.com writes:
DR> The thing to remember about call/cc is that it potentially can
DR> duplicate large portions of the stack. 9 times out of 10 you
DR> will be better off writing your code using explicit CPS,
DR> anyway. (Less filling! Tastes Great!)
Hmmm...okay, then I'm getting more mixed messages about call/cc. I know I've read in more than one place that "any program written in CPS can be rewritten more efficiently using call-with-current-continuation..."
Well, that statement requires a bit of context. If you're talking about transforming your whole program by hand into CPS form and then using an arbitrary continuation to escape from an inner loop, then call/cc is *way* more efficient in terms of your own productivity. I'd be very surprised if there is any implementation where using call/cc is as CPU-efficient as explicitly passing continuation functions which are called from tail-position in your code - which is what I think I remember seeing. And if you capture a continuation via call/cc and then recurse you start getting into the allocation issues.
Now IIRC, Gambit is actually pretty clever in its call/cc stack management strategy, but because of the C calling convention there are still spots where you can cause chunks of stack to get copied. Marc has published a number of papers on this topic and I'm quite sure I am doing a ton of violence to his work.
The reasons for explicitly passing your own continuation functions are manifold:
1) they help ensure the typological correctness of your program. Yes this is Lisp and all, but it does mean that really smart compilers (e.g. Stalin) can better optimize your code
2) it will reduce the number of errors you make because you will start treating different cases differently instead of trying to cook up spurious data hacks to conflate cases into a single return type
3) you will never have to worry about call-with-values again because you have explicit control over the arity of all the continuations that matter
4) it's good practice in developing your awareness of tail-call sites - which helps to keep your recursions clean
My personal poster-child case for explicit CPS is the assoc function, which returns a pair? if the key is found in the a-list and #f if it is not. My standard prelude now includes a super-sized version of this:
(define (assoc-k tag a-list k-success k-fail)
  (if (null? a-list)
      (k-fail tag)
      (let* ((head (car a-list))
             (rest (cdr a-list))
             (head-tag (car head)))
        (if (equal? tag head-tag)
            (k-success head)
            (assoc-k tag rest k-success k-fail)))))
which is great for about a zillion reasons (which are left as an exercise for the reader). But please note: This is not a fully CPS-transformed program; however it is a program where you pass in the explicit continuations of the function call.
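A hypothetical call, to make the benefit concrete: the call site states both outcomes up front, so no pair?/#f check is needed afterwards.

;; hypothetical usage of assoc-k: both outcomes stated up front
(assoc-k 'b '((a . 1) (b . 2) (c . 3))
         (lambda (entry) (cdr entry))            ; found: use the pair
         (lambda (tag) (error "no such key:" tag)))
;; => 2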
DR> Have you traced this program to see where it's allocating
DR> heavily?
What's the best way to do that?
Errr...print out data structures at key points of the program and see if they're bigger than you expect?
Thanks a heap ;)
Don't make me Pun-ish you.
david rush
"DR" == David Rush kumoyuki@gmail.com writes:
DR> And if you capture a continuation via call/cc and then recurse
DR> you start getting into the allocation issues.
I've removed all instances of call/cc from the application code --- there may be some lingering in my genetics library that I'll check for.
I still get the memory killing behavior at large population sizes, even though I've reduced another parameter to 1 (thus reducing the number of iterations by 50 times). It worked at 1000, but when I increased "N" to 2000, my workstation entirely locked up after about half an hour. There must be something totally different going on, or like I said it would just take longer.
DR> 1) they help ensure the typological correctness of your
DR> program. Yes this is Lisp and all, but it does mean that really
DR> smart compilers (e.g. Stalin) can better optimize your code
On the issue of using different compilers: outside of using the Gambit-C FFI, how locked in to Gambit-C am I? How many of Gambit's non-standard features are implemented in other compilers (e.g., Bigloo, Stalin, Larceny)? Does anybody use a different compiler and if so, what's your porting strategy?
DR> Have you traced this program to see where it's allocating
DR> heavily?
>>
>> What's the best way to do that?
DR> Errr...print out data structures at key points of the program
DR> and see if they're bigger than you expect?
Okay: I thought you were talking about a specific debugging procedure (duh).
>> Thanks a heap ;)
DR> Don't make me Pun-ish you.
Thanks a stack, Joel
I don't have the time to dig into your code. Just to make sure:
- be sure to compile the program; the Gambit interpreter does not analyze the lifetime of lexical bindings, and thus retains memory longer than necessary, and thus possibly longer than you anticipated. So make sure you compile the program before concluding that it has a memory problem.
- if there is in fact a memory problem with the compiled program, then run it in the interpreter ;-). Or at least partially (you could compile the parts of the program you don't want to analyze at the moment). Because then you can easily step through it and find out what happens. (For debugging, also make sure you know how to use |generate-proper-tail-calls|.)
- find out what the reason for the out-of-memory situation could be: is (a) your problem maybe just asking for more memory than you've got?, or (a2) your problem doesn't necessarily ask for more memory than you've got in principle, but the way you're evaluating it (e.g. calculating too eagerly) requires too much memory at once, or (b) you've got an error in the program which leads to an infinite loop allocating memory, or (c) your program is holding on to memory that it doesn't need anymore?
For (c) check your assumptions about the lifetime of memory: particularly, be aware that structs (as defined using define-structure or define-type), or in fact any data structure like cons cells and vectors, will hold on to every location in them, even to those you'll never use again (this is unlike lexical bindings, which, as I said above, will be analyzed by the compiler and only live as long as the program will possibly refer to them). If in doubt, copy the relevant data to new data structures (or, if you want to go imperative, delete places in the structure by setting them to #f or (void) or whatever).
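A small sketch of that copy-out advice, with hypothetical data (the vector stands in for any struct holding a mix of live and dead fields):

;; hedged sketch; the names are hypothetical.
;; Holding `rec' retains every field, including the big one...
(define rec (vector 'id-42 (make-vector 1000000 0)))

;; ...so copy out just what is still needed and drop the group:
(define id-only (vector-ref rec 0))
(set! rec #f) ; the million-element vector is now collectable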
For (a2) using delay and force can help; but check my post at https://webmail.iro.umontreal.ca/pipermail/gambit-list/2007-May/001435.html, i.e. the current promises actually do retain memory longer than they have to ;), this is an example for which my hint above for problem (c) applies. So you might actually want to use the implementation of delay and force given in that mail.
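The idea behind that fix, sketched from scratch (this is not the code from the linked mail):

;; once forced, the promise drops its thunk, so everything the
;; thunk's closure kept alive becomes collectable
(define (make-promise* thunk) (vector #f thunk))

(define (force* p)
  (let ((thunk (vector-ref p 1)))
    (if thunk
        (begin
          (vector-set! p 0 (thunk))
          (vector-set! p 1 #f)))) ; release the thunk and its closure
  (vector-ref p 0))

(define p (make-promise* (lambda () (* 6 7))))
(force* p) ; => 42, and p no longer references the thunk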
Note that when heeding these precautions I could never get Gambit to leak memory in compiled programs. (So my expectation is that (unless some other implementation implements structures or vectors as dissectable entities, which I doubt) you won't get better behaviour by porting to other implementations.)
Christian.
"chj" == Christian Jaeger christian@pflanze.mine.nu writes:
chj> I don't have the time to dig into your code. Just to make sure:

chj> - be sure to compile the program;
I have the same problem (at least the same result as far as the crash is concerned) with compiled or interpreted versions.
chj> (For debugging, also make sure you know how to use
chj> |generate-proper-tail-calls|.)
I have used (generate-proper-tail-calls #f): is there more to it than that? Where can I read about it?
chj> - find out what the reason for the out-of-memory situation could
chj> be: is (a) your problem maybe just asking for more memory than
chj> you've got?, or (a2) your problem doesn't necessarily ask for
chj> more memory than you've got in principle, but the way you're
chj> evaluating it (e.g. calculating too eagerly) requires too much
chj> memory at once, or (b) you've got an error in the program which
chj> leads to an infinite loop allocating memory, or (c) your
chj> program is holding on to memory that it doesn't need anymore?
I would find (c) to be the most plausible choice; can I rule out the other two by tweaking my parameters? If I run fewer iterations --- that is, if I create fewer structs, 1000 instead of 2000 --- I don't have the problem and the program completes successfully.
chj> For (c) check your assumptions about the lifetime of memory:
chj> particularly, be aware that structs (as defined using
chj> define-structure or define-type), or in fact any data structure
chj> like cons cells and vectors, will hold on to every location in
chj> them
Okay, so am I screwed here? Just kidding: if I bind them lexically, I would need to get the data out of them before exiting that closure (tail-calling my "data-collection" function) --- I can do that.
chj> (this is unlike lexical bindings, which, as I said above,
chj> will be analyzed by the compiler and only live as long as the
chj> program will possibly refer to them).
Just to make sure I'm understanding: if I do everything with a particular data structure within a lexical closure, then I can use whichever sort of data-structure I want (struct, list, vector, etc).
As an example,
(define-structure female mating-status mated not-mated strategy age times-mated)
(define (do-struct i)
  (let ((struct (make-female #f '() '() (random-strategy) 1 0)))
    (do ((j 0 (+ 1 j)))
        ((= j 5) (print (female-age struct) "\n"))
      (female-age-set struct (+ j (female-age female))))))
This `female' structure is going to get gc'ed when do-struct exits?
chj> Note that when heeding these precautions I could never get chj> Gambit to leak memory in compiled programs.
Good to know.
Thanks, Joel
Joel J. Adamson adamsonj@email.unc.edu wrote:
I have used (generate-proper-tail-calls #f): is there more to it than that? Where can I read about it?
It's in the Gambit manual. You should just know that it only has an effect in the interpreter, and only for subsequently loaded code, and that if you switch it to #f you will almost *certainly* leak memory; but it can make it easier to see what a program is doing (especially when getting an error, since you can see the full call chain).
BTW if it's really the kernel killing your program (aka OOM killer, and not just Gambit unable to get more memory), then limit the virtual memory to a low enough value (ulimit -v). In many circumstances Gambit will just throw an out of memory exception then; if you're starting your program so that it runs the repl ("enters the debugger") when getting uncaught exceptions, you'll then be right at the spot where it has allocated too much memory.
chj> - find out what the reason for the out-of-memory situation could
chj> be: is (a) your problem maybe just asking for more memory than
chj> you've got?, or (a2) your problem doesn't necessarily ask for
chj> more memory than you've got in principle, but the way you're
chj> evaluating it (e.g. calculating too eagerly) requires too much
chj> memory at once, or (b) you've got an error in the program which
chj> leads to an infinite loop allocating memory, or (c) your
chj> program is holding on to memory that it doesn't need anymore?
I would find (c) to be the most plausible choice; can I rule out the other two by tweaking my parameters? If I run fewer iterations --- that is, if I create fewer structs, 1000 instead of 2000 --- I don't have the problem and the program completes successfully.
Well, you could check whether the amount of needed memory grows linearly or quadratically with the input value, for example.
(a)/(a2) are a question of understanding the algorithm.
chj> For (c) check your assumptions about the lifetime of memory:
chj> particularly, be aware that structs (as defined using
chj> define-structure or define-type), or in fact any data structure
chj> like cons cells and vectors, will hold on to every location in
chj> them
Okay, so am I screwed here? Just kidding: if I bind them lexically, I would need to get the data out of them before exiting that closure (tail-calling my "data-collection" function) --- I can do that.
chj> (this is unlike lexical bindings, which, as I said above,
chj> will be analyzed by the compiler and only live as long as the
chj> program will possibly refer to them).
Just to make sure I'm understanding: if I do everything with a particular data structure within a lexical closure, then I can use whichever sort of data-structure I want (struct, list, vector, etc).
Hm. This sentence isn't precise enough to say whether it's right.
What can be said is that if you do not group your values artificially (by using vectors/structures/lists) but keep and pass them as individual items in individual lexical bindings, then the compiler will help avoid memory retention issues: those items which the compiler can see will never be accessed again are released as soon as possible (i.e. the generated code keeps no reference to them). Whereas if you group them yourself, you are also responsible for deciding when you no longer need all the values in a group, and for copying out only those values of the group that you still need.
As an example,
(define-structure female mating-status mated not-mated strategy age times-mated)
(define (do-struct i)
  (let ((struct (make-female #f '() '() (random-strategy) 1 0)))
    (do ((j 0 (+ 1 j)))
        ((= j 5) (print (female-age struct) "\n"))
      (female-age-set struct (+ j (female-age female))))))
This `female' structure is going to get gc'ed when do-struct exits?
Well, the above code is not a complete program (random-strategy, female-age-set, and female are all missing).
I guess you meant to use female-age-set!.
I'm not sure I can see the point you are trying to make.
If the female structure is not being referenced anymore it will be gc'd, of course. The retention problem is one where you are still holding a reference to a structure, because you are interested in one (or more) of the values in it, but not in other values, and those other values won't be gc'd.
Christian.
"David Rush" kumoyuki@gmail.com writes:
I'd be very surprised if there is any implementation where using call/cc is as CPU-efficient as explicitly passing continuation functions which are called from tail-position in your code
Chicken. Cheney on the MTA gives you call/cc essentially for free - it's just as fast as any other function call.
Gambit does not use Cheney on the MTA, mainly because it interferes with the implementation of unrestricted calls from Scheme to C and from C to Scheme. Gambit's implementation of continuations is done with a lazy copy of the captured continuation. The performance is quite good... on the two call/cc intensive Gambit benchmarks (ctak and fibc) Gambit outperforms Chicken. Note also that Gambit's thread system is based on continuations, so it is important for continuation operations to be efficient.
Marc
On 24-Sep-08, at 11:14 AM, Per Eckerdal wrote:
Chicken. Cheney on the MTA gives you call/cc essentially for free - it's just as fast as any other function call.
I was under the impression that Gambit also did this.. Am I wrong?
/Per
Marc Feeley feeley@iro.umontreal.ca writes:
Gambit does not use Cheney on the MTA, mainly because it interferes with the implementation of unrestricted calls from Scheme to C and from C to Scheme. Gambit's implementation of continuations is done with a lazy copy of the captured continuation. The performance is quite good... on the two call/cc intensive Gambit benchmarks (ctak and fibc) Gambit outperforms Chicken.
Really? This page
http://www.ccs.neu.edu/home/will/Twobit/benchmarksFakeR6Linux.html
shows Chicken outperforming Gambit on ctak (it doesn't have fibc).
On Sep 25, 2008, at 10:21 PM, Alex Shinn wrote:
Marc Feeley feeley@iro.umontreal.ca writes:
Gambit does not use Cheney on the MTA, mainly because it interferes with the implementation of unrestricted calls from Scheme to C and from C to Scheme. Gambit's implementation of continuations is done with a lazy copy of the captured continuation. The performance is quite good... on the two call/cc intensive Gambit benchmarks (ctak and fibc) Gambit outperforms Chicken.
Really? This page
http://www.ccs.neu.edu/home/will/Twobit/benchmarksFakeR6Linux.html
shows Chicken outperforming Gambit on ctak (it doesn't have fibc).
It has fibc (gambit code runs faster than chicken code).
On 25-Sep-08, at 10:46 PM, Bradley Lucier wrote:
On Sep 25, 2008, at 10:21 PM, Alex Shinn wrote:
Marc Feeley feeley@iro.umontreal.ca writes:
Gambit does not use Cheney on the MTA, mainly because it interferes with the implementation of unrestricted calls from Scheme to C and from C to Scheme. Gambit's implementation of continuations is done with a lazy copy of the captured continuation. The performance is quite good... on the two call/cc intensive Gambit benchmarks (ctak and fibc) Gambit outperforms Chicken.
Really? This page
http://www.ccs.neu.edu/home/will/Twobit/benchmarksFakeR6Linux.html
shows Chicken outperforming Gambit on ctak (it doesn't have fibc).
It has fibc (gambit code runs faster than chicken code).
The latest set of benchmark results I have (Gambit-C 4.2.8 with latest patches, Chicken 3.3.0 SVN rev. 11106, gcc 4.0.1, Mac OS X 10.5.5, Mac Book Pro, 2 GB RAM, 2.0 GHz Dual Core Intel CPU) give:
"r5rs" mode: ctak: Chicken is 1.71 times slower than Gambit fibc: Chicken is 2.60 times slower than Gambit
"r6rs" mode: ctak: Chicken is 1.03 times slower than Gambit fibc: Chicken is 1.34 times slower than Gambit
The difference between the two modes is that in "r6rs" mode the (standard-bindings) declaration is used, i.e. the predefined variables are assumed to be immutable (so "+" is bound to the addition function, etc). Note that on SUN, Clinger's benchmarks (with different versions of the compilers) give a factor of 2 advantage to Gambit on these benchmarks so the processor architecture probably has an influence as well.
The slight variation with Clinger's results for "r6rs" mode is probably due to improvements/degradation in the Scheme and C compilers, and the different set of options given to the compilers to try to make them assume a similar setting (including the amount of RAM available for the heap).
The point I am trying to make is that in a Scheme to C compiler continuations can be implemented in other ways than Cheney on the MTA to get a system with good performance for call/cc. Whether one system is a few percent faster than the other on these benchmarks is quite possibly due to other factors unrelated to the implementation of continuations.
Another point I want to make is that Cheney on the MTA gives you "free" call/cc only after paying a premium on other things, namely stack-like behaving function calls and tail-calls. Because typical code, and even realistic call/cc intensive code such as a thread system, do much more of these other things than calling call/cc, the overall performance of the system is suboptimal in general. With the latest set of benchmark results on 51 benchmark programs, in "r6rs" mode Chicken is 2.7 times slower than Gambit on average (geometric mean). That's the cost of "free" call/cc.
Marc
Hi,
Marc Feeley feeley@iro.umontreal.ca writes:
The point I am trying to make is that in a Scheme to C compiler continuations can be implemented in other ways than Cheney on the MTA to get a system with good performance for call/cc. Whether one system is a few percent faster than the other on these benchmarks is quite possibly due to other factors unrelated to the implementation of continuations.
Indeed, those benchmarks are both highly influenced by the speed of generic arithmetic, which Chicken is slow at. If you set the options for both implementations to use unsafe, fixnum-only arithmetic, the computation amounts to practically nothing, and all you're comparing is the speed of call/cc. In this case I find Chicken is roughly 1.4x faster for ctak, and 2x faster for fibc.
Chicken is a simple compiler with relatively few optimizations. The fact that it can nonetheless outperform Gambit (which is otherwise faster in general) on these benchmarks suggests that Cheney on the MTA gives you very fast continuations.
Another point I want to make is that Cheney on the MTA gives you "free" call/cc only after paying a premium on other things, namely stack-like behaving function calls and tail-calls.
Sure, to be clear I'm not claiming that Cheney on the MTA is a superior architecture, just that it has fast continuations. Specifically, in answer to the original question, you can't get notably faster code with manual CPS than with call/cc in Chicken. But as you say, it comes with trade-offs, and I wouldn't be so rude as to recommend people use Chicken on the Gambit list :)
I do think that with a good optimizing compiler, a lot of the differences in strategies can be optimized away though. For example Chicken already contracts self tail-calls so that simple loops use goto - they're not "stack-like" - and many more optimizations can help close the gap.
On 26-Sep-08, at 10:41 AM, Alex Shinn wrote:
Hi,
Marc Feeley feeley@iro.umontreal.ca writes:
The point I am trying to make is that in a Scheme to C compiler continuations can be implemented in other ways than Cheney on the MTA to get a system with good performance for call/cc. Whether one system is a few percent faster than the other on these benchmarks is quite possibly due to other factors unrelated to the implementation of continuations.
Indeed, those benchmarks are both highly influenced by the speed of generic arithmetic, which Chicken is slow at. If you set the options for both implementations to use unsafe, fixnum-only arithmetic, the computation amounts to practically nothing, and all you're comparing is the speed of call/cc. In this case I find Chicken is roughly 1.4x faster for ctak, and 2x faster for fibc.
You are comparing Chicken to Chicken using different modes right? When Chicken and Gambit are benchmarked in "r6rs-fixflo-unsafe" mode (which combines declarations for standard-bindings, fixnum specific operations and unsafe execution (no type checks)) the results I get are:
ctak: Chicken is 1.03 times faster than Gambit
fibc: Gambit is 1.01 times faster than Chicken
Given all the indeterminism in the processors (cache alignment, cache hits, etc) the execution times should be considered equal.
Chicken is a simple compiler with relatively few optimizations. The fact that it can nonetheless outperform Gambit (which is otherwise faster in general) on these benchmarks suggests that Cheney on the MTA gives you very fast continuations.
The conclusion from my benchmarks is quite different. Chicken does not outperform Gambit on these benchmarks. There is so little other stuff happening than call/cc in these benchmarks that it would appear that the performance of call/cc in Chicken and Gambit is essentially the same (to within a few percent).
Another point I want to make is that Cheney on the MTA gives you "free" call/cc only after paying a premium on other things, namely stack-like behaving function calls and tail-calls.
Sure, to be clear I'm not claiming that Cheney on the MTA is a superior architecture, just that it has fast continuations. Specifically, in answer to the original question, you can't get notably faster code with manual CPS than with call/cc in Chicken. But as you say, it comes with trade-offs, and I wouldn't be so rude as to recommend people use Chicken on the Gambit list :)
I do think that with a good optimizing compiler, a lot of the differences in strategies can be optimized away though. For example Chicken already contracts self tail-calls so that simple loops use goto - they're not "stack-like" - and many more optimizations can help close the gap.
Only time will tell if all the optimizations required to match Gambit's current performance will be added to Chicken before the performance of Gambit is improved with new optimizations of its own!
But even if self tail-calls are handled better, stack-like tail-calls (in the Scheme source code) will suffer with Cheney on the MTA because you will not get stack-like behavior in the generated C code (at least in the general case). This will translate into additional GC pressure, a lower hit ratio for the caches, and a lower branch-prediction performance (note that the last point is shared with Gambit but not the first two). As you can see I am pessimistic about the performance that can be obtained with a Cheney on the MTA approach.
Marc
[I trimmed off the chicken-users list because I'm not interested in a pissing match between implementations :)]
Marc Feeley feeley@iro.umontreal.ca writes:
You are comparing Chicken to Chicken using different modes right?
Nope, Chicken to Gambit.
When Chicken and Gambit are benchmarked in "r6rs-fixflo-unsafe" mode (which combines declarations for standard-bindings, fixnum specific operations and unsafe execution (no type checks)) the results I get are:
ctak: Chicken is 1.03 times faster than Gambit
fibc: Gambit is 1.01 times faster than Chicken
I'm using Chicken 3.4.0 with the -Ob optimization level, and Gambit 4.1.0 with
(declare (standard-bindings) (extended-bindings) (block) (not safe) (fixnum))
on an x86 Mac OS X machine. Running each benchmark 5 times (as separate processes), discarding the high and low and averaging the middle 3 times I get:
         ctak   fibc
Chicken  0.023  0.011
Gambit   0.033  0.024
hence the 1.4x and 2x claims.
On 26-Sep-08, at 11:45 AM, Alex Shinn wrote:
[I trimmed off the chicken-users list because I'm not interested in a pissing match between implementations :)]
Marc Feeley feeley@iro.umontreal.ca writes:
You are comparing Chicken to Chicken using different modes right?
Nope, Chicken to Gambit.
When Chicken and Gambit are benchmarked in "r6rs-fixflo-unsafe" mode (which combines declarations for standard-bindings, fixnum specific operations and unsafe execution (no type checks)) the results I get are:
ctak: Chicken is 1.03 times faster than Gambit
fibc: Gambit is 1.01 times faster than Chicken
I'm using Chicken 3.4.0 with the -Ob optimization level, and Gambit 4.1.0 with
(declare (standard-bindings) (extended-bindings) (block) (not safe) (fixnum))
on an x86 Mac OS X machine. Running each benchmark 5 times (as separate processes), discarding the high and low and averaging the middle 3 times I get:
         ctak   fibc
Chicken  0.023  0.011
Gambit   0.033  0.024
hence the 1.4x and 2x claims.
-- Alex
When I try the same thing on my 2 GHz MacBook Pro (with ctak repeated 100 times and fibc repeated 1000 times) I get:
         ctak    fibc
Chicken  1.883s  4.551s
Gambit   1.970s  3.118s
So,
  ctak: Chicken is 1.05 times faster than Gambit
  fibc: Gambit is 1.46 times faster than Chicken
I've attached the trace and source code below. The wild difference in performance you get is perhaps due to the old version of Gambit you are using. Can you please try this on your machine with Gambit v4.2.8?
Marc
% gsc -v
v4.2.8
% gsc ctak.scm
% gsi ctak
(time (go 100))
    1970 ms real time
    1961 ms cpu time (1838 user, 123 system)
    1434 collections accounting for 324 ms real time (292 user, 31 system)
    1221381344 bytes allocated
    no minor faults
    no major faults
7
% gsc fibc.scm
% gsi fibc
(time (go 1000))
    3118 ms real time
    3103 ms cpu time (2945 user, 158 system)
    1806 collections accounting for 411 ms real time (373 user, 38 system)
    1538392000 bytes allocated
    no minor faults
    no major faults
2584
% csc -V
CHICKEN
(c)2008 The Chicken Team
(c)2000-2007 Felix L. Winkelmann
Version 3.3.0 - macosx-unix-gnu-x86 [ manyargs dload ptables applyhook ]
SVN rev. 11106  compiled 2008-09-22 on neo.local (Darwin)
Enter "chicken -help" for information on how to use it. % csc -b ctak.scm % ./ctak 1.883 seconds elapsed 0.044 seconds in (major) GC 0 mutations 71 minor GCs 91 major GCs 7 % csc -b fibc.scm % ./fibc 4.551 seconds elapsed 0.093 seconds in (major) GC 0 mutations 206 minor GCs 174 major GCs 2584 % cat ctak.scm (declare (standard-bindings) (extended-bindings) (block) (not safe) (fixnum))
(define (ctak x y z)
  (call-with-current-continuation
   (lambda (k) (ctak-aux k x y z))))
(define (ctak-aux k x y z)
  (if (not (< y x))
      (k z)
      (call-with-current-continuation
       (lambda (k)
         (ctak-aux
          k
          (call-with-current-continuation
           (lambda (k) (ctak-aux k (- x 1) y z)))
          (call-with-current-continuation
           (lambda (k) (ctak-aux k (- y 1) z x)))
          (call-with-current-continuation
           (lambda (k) (ctak-aux k (- z 1) x y))))))))
(define (go n)
  (let loop ((n n) (r '()))
    (if (> n 0)
        (loop (- n 1) (ctak 18 12 6))
        r)))
(pretty-print (time (go 100)))
% cat fibc.scm
(declare (standard-bindings) (extended-bindings) (block) (not safe) (fixnum))
(define (_1+ n) (+ n 1))
(define (_1- n) (- n 1))
(define (addc x y k)
  (if (zero? y)
      (k x)
      (addc (_1+ x) (_1- y) k)))
(define (fibc x c)
  (if (zero? x)
      (c 0)
      (if (zero? (_1- x))
          (c 1)
          (addc (call-with-current-continuation
                 (lambda (c) (fibc (_1- x) c)))
                (call-with-current-continuation
                 (lambda (c) (fibc (_1- (_1- x)) c)))
                c))))
(define (go n)
  (let loop ((n n) (r '()))
    (if (> n 0)
        (loop (- n 1) (fibc 18 (lambda (n) n)))
        r)))
(pretty-print (time (go 1000)))
Marc Feeley feeley@iro.umontreal.ca writes:
I've attached the trace and source code below. The wild difference in performance you get is perhaps due to the old version of Gambit you are using. Can you please try this on your machine with Gambit v4.2.8?
Before I do that (because I need to close every application on my machine to have enough memory to compile Gambit), could you try using the same flags I used for Chicken? It's -Ob (or -benchmark-mode), not -b.
On Fri, Sep 26, 2008 at 5:32 PM, Marc Feeley feeley@iro.umontreal.ca wrote:
The conclusion from my benchmarks is quite different. Chicken does not outperform Gambit on these benchmarks. There is so little other stuff happening than call/cc in these benchmarks that it would appear that the performance of call/cc in Chicken and Gambit is essentially the same (to within a few percent).
Why not simply say: chicken and gambit are roughly in the same ballpark?
In the end, I have learned that nearly every performance assumption I made was wrong, and I'm a pretty experienced Scheme coder. Performing benchmarks like this and trying to extract any kind of practical relevance from the fact that program X on implementation Y with optimization settings Z takes 2% longer than on implementation Q is futile. Are you sure you have built both implementations with maximal performance settings? Have you measured how much runtime performance the memory patterns in this particular benchmark have cost? How do you know how your system configuration and hardware setup influence the outcome? Have you used optimal optimization settings for all implementations for this benchmark? Have you analyzed the compiler output to look for opportunities to tweak those settings for this particular benchmark? Do you know enough about chicken's internals and compiler options to choose the optimal combination (you couldn't, just as I couldn't for Gambit)? It's all just assumptions.
The very reason Scheme and Lisp have so little acceptance and are not more widespread is that their implementors are so obsessed with performance (for hystorical raisins, of course), instead of making their implementations easier to work with, more practical and more useful.
Nevertheless I understand this obsession, it's lots of fun, after all. :-)
So: CheneyOnTheMTA is an elegant concept that unifies fast first-class continuations, fast allocation, generational GC and not-too-difficult FFI in a relatively simple framework. Chicken's compiler is sufficient, but there are many opportunities to improve performance, some of which will be addressed, but which aren't really that important. A real module system (soon to come!) and 400+ libraries is what will make users happy, not 5% better performance.
I believe that CheneyOnTheMTA is more memory-efficient than other Lisp-implementation techniques. I also believe that the CPS-output of this scheme is more C-compiler friendly and easier to compile on stock machines. I believe that COTMTA (that's a nice abbreviation - I think I'll use that from now on) makes cross-module calls more efficient than trampoline-style, which is important for large code-bases that use separate compilation and dynamically loaded plugins. These are all assumptions that may possibly be completely wrong.
Keep up the good work, Marc! Gambit is cool. But chicken is better. ;-)
cheers, felix
Hi Felix. I did not mean to drag you into this discussion. I know performance benchmarking is one of your buttons that is best left untouched!
All of this started with this message on the Gambit mailing list about the performance claim that call/cc in Chicken was "free" because of Cheney on the MTA and that Gambit used the same approach:
On 24-Sep-08, at 11:14 AM, Per Eckerdal wrote:
Chicken. Cheney on the MTA gives you call/cc essentially for free - it's just as fast as any other function call.
I was under the impression that Gambit also did this.. Am I wrong?
/Per
My response was that Gambit's continuations are based on a completely different approach which gives just as good performance, using the ctak and fibc benchmarks as simple evidence. A complete analysis of the two approaches would take a lot of effort, which is why I used these benchmarks as a quick-and-dirty way to evaluate the performance (it turns out that ctak is much better than fibc as a benchmark for call/cc because fibc does many other things than just call/cc, i.e. it measures other optimizations of the compiler).
Let me reiterate that I'm not trying to compare Gambit and Chicken as systems. If that was the case I would have much more to say and obviously would conclude that Gambit is better ;-)
Marc
On 27-Sep-08, at 9:03 AM, felix winkelmann wrote:
On Fri, Sep 26, 2008 at 5:32 PM, Marc Feeley feeley@iro.umontreal.ca wrote:
The conclusion from my benchmarks is quite different. Chicken does not outperform Gambit on these benchmarks. There is so little other stuff happening than call/cc in these benchmarks that it would appear that the performance of call/cc in Chicken and Gambit is essentially the same (to within a few percent).
Why not simply say: chicken and gambit are roughly in the same ballpark?
In the end, I have learned that nearly every performance assumption I made was wrong, and I'm a pretty experienced Scheme coder. Performing benchmarks like this and trying to extract any kind of practical relevance from the fact that program X on implementation Y with optimization settings Z takes 2% longer than on implementation Q is futile. Are you sure you have built both implementations with maximal performance settings? Have you measured how much runtime performance the memory patterns in this particular benchmark have cost? How do you know how your system configuration and hardware setup influence the outcome? Have you used optimal optimization settings for all implementations for this benchmark? Have you analyzed the compiler output to look for opportunities to tweak those settings for this particular benchmark? Do you know enough about chicken's internals and compiler options to choose the optimal combination (you couldn't, just as I couldn't for Gambit)? It's all just assumptions.
The very reason Scheme and Lisp have so little acceptance and are not more widespread is that their implementors are so obsessed with performance (for hystorical raisins, of course), instead of making their implementations easier to work with, more practical and more useful.
Nevertheless I understand this obsession, it's lots of fun, after all. :-)
So: CheneyOnTheMTA is an elegant concept that unifies fast first-class continuations, fast allocation, generational GC and not-too-difficult FFI in a relatively simple framework. Chicken's compiler is sufficient, but there are many opportunities to improve performance, some of which will be addressed, but which aren't really that important. A real module system (soon to come!) and 400+ libraries is what will make users happy, not 5% better performance.
I believe that CheneyOnTheMTA is more memory-efficient than other Lisp-implementation techniques. I also believe that the CPS-output of this scheme is more C-compiler friendly and easier to compile on stock machines. I believe that COTMTA (that's a nice abbreviation - I think I'll use that from now on) makes cross-module calls more efficient than trampoline-style, which is important for large code-bases that use separate compilation and dynamically loaded plugins. These are all assumptions that may possibly be completely wrong.
Keep up the good work, Marc! Gambit is cool. But chicken is better. ;-)
cheers, felix
On Sat, Sep 27, 2008 at 4:20 PM, Marc Feeley feeley@iro.umontreal.ca wrote:
Hi Felix. I did not mean to drag you into this discussion. I know performance benchmarking is one of your buttons that is best left untouched!
I'm happy to have taken my little part in it.
All of this started with this message on the Gambit mailing list about the performance claim that call/cc in Chicken was "free" because of Cheney on the MTA and that Gambit used the same approach:
On 24-Sep-08, at 11:14 AM, Per Eckerdal wrote:
Chicken. Cheney on the MTA gives you call/cc essentially for free - it's just as fast as any other function call.
I was under the impression that Gambit also did this.. Am I wrong?
/Per
My response was that Gambit's continuations are based on a completely different approach which gives just as good performance, using the ctak and fibc benchmarks as simple evidence. A complete analysis of the two approaches would take a lot of effort, which is why I used these benchmarks as a quick-and-dirty way to evaluate the performance (it turns out that ctak is much better than fibc as a benchmark for call/cc because fibc does many other things than just call/cc, i.e. it measures other optimizations of the compiler).
Right, but aren't all benchmarks just quick-and-dirty? (hopefully quick, of course).
Note that in chicken the continuations that the Scheme programmer sees are not the real ones: they are wrapped in a closure and must perform extra work checking dynamic-wind thunks. There are some internal procedures (##sys#call-with-direct-continuation and ##sys#direct-return) which are really cost-free, as they directly use the implicit continuations that are created in the compilation process.
Let me reiterate that I'm not trying to compare Gambit and Chicken as systems. If that was the case I would have much more to say and obviously would conclude that Gambit is better ;-)
Of course.
cheers, felix
On Fri, Sep 26, 2008 at 3:41 PM, Alex Shinn alexshinn@gmail.com wrote:
continuations. Specifically, in answer to the original question, you can't get notably faster code with manual CPS than with call/cc in Chicken.
But you can get notably faster code in many other implementations. Secondly, the use of *explicit CPS* is not the same issue as 'manual CPS'. Sometimes passing continuations explicitly is a good thing: it can clarify control-flow and promote type coherence. Both of these can be exploited to produce faster code as well.
Don't get me wrong - I am a big fan of call/cc. But it is a *big* gun and it is silly to use it for relatively simple things when a trivial code rearrangement will also produce code that is more sound, easier to reason about, and potentially faster.
david rush
"DR" == David Rush kumoyuki@gmail.com writes:
DR> Don't get me wrong - I am a big fan of call/cc. But it is a
DR> *big* gun and it is silly to use it for relatively simple things
DR> when a trivial code rearrangement will also produce code that is
DR> more sound, easier to reason about, and potentially faster.
So can you give a non-trivial example? As I said, I used it because my problem was the sort of thing that intro textbooks (e.g., The Scheme Programming Language) say to use it for.
Joel
So can you give a non-trivial example? As I said, I used it because my problem was the sort of thing that intro textbooks (e.g., The Scheme Programming Language) say to use it for.
I've been playing around with using continuations a little. It's a fun little exercise and you can certainly use it to do some extraordinary things. It really is a big gun. A good example of a less trivial thing to do is continuation-based web frameworks. Here are a couple of articles that nicely explain the concept:
http://www.double.co.nz/scheme/modal-web-server.html
Other classic examples of what you can do are implementing coroutines, Python-style generators, cooperative threads and exceptions. None of those are impossible to understand, so I encourage you to look into them if you're interested. I think continuations are a beautiful concept: they are so simple and so powerful. However, it is not always easy to get an overview of what's happening if you use them more than just locally in a function. I find the interaction with the dynamic environment especially difficult to grasp.
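As a taste, here is a from-scratch sketch of a Python-style generator built on call/cc (make-generator is a made-up name; dynamic-wind interactions, which come up below, are deliberately ignored):

;; a minimal generator sketch using call/cc
(define (make-generator producer) ; producer receives a `yield' proc
  (define resume-k #f)            ; where the producer left off
  (define return-k #f)            ; where the consumer is waiting
  (define (yield v)
    (call-with-current-continuation
     (lambda (k)
       (set! resume-k k)          ; remember our spot in the producer
       (return-k v))))            ; jump back to the consumer with v
  (lambda ()
    (call-with-current-continuation
     (lambda (k)
       (set! return-k k)
       (if resume-k
           (resume-k #f)          ; pick up after the last yield
           (begin (producer yield)
                  (return-k 'done)))))))

(define next!
  (make-generator (lambda (yield) (yield 1) (yield 2))))
(next!) ; => 1
(next!) ; => 2
(next!) ; => done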
The relationship between call/cc and dynamic-wind is another interesting example related to more complex use of call/cc. In the appendix of http://mumble.net/~jar/pubs/scheme-of-things/june-92-meeting.ps there is an implementation of dynamic-wind. I had a great "aha" experience when I got how that code works; it describes in a very practical way how call/cc makes the stack into a tree.
hope that helps.
/Per
On Sat, Sep 27, 2008 at 2:56 AM, Joel J. Adamson <adamsonj@email.unc.edu> wrote:
"DR" == David Rush kumoyuki@gmail.com writes:
DR> Don't get me wrong - I am a big fan of call/cc. But it is a
DR> *big* gun and it is silly to use it for relatively simple things
DR> when a trivial code rearrangement will also produce code that is
DR> more sound, easier to reason about, and potentially faster.
So can you give a non-trivial example?
Well I already did earlier in this thread. For a larger example, I'd have to point you to a rather larger program than would fit in the margin of this email :)
Some years ago I did quite a bit of (very unscientific) benchmarking w/rt different Scheme implementations and discovered the importance of type-coherence to the performance of compiled code. The poster child for this is, of course, Stalin, but *every* Scheme compiler has an 'unsafe' mode (which disables type-checking to some degree), and using explicit CPS makes it easier to keep your code correct under those conditions.
As I said, I used it because my problem was the sort of thing that intro textbooks (e.g., The Scheme Programming Language) say to use it for.
Well, that's because it's an easy way to demonstrate the power of the construct. It is however poor software engineering as the use of call/cc frequently gets you involved in the law of unintended consequences. One correspondent actually pointed out to me that call/cc is closely akin to set! as a side-effecting operation. It certainly can have global consequences as it is a run-time implementation of a global compile-time transformation.
Which is all different from what I am suggesting. call/cc is - just like macros - something where most of the time there is a cleaner and more elegant way to achieve the same effect. Those few times where you do need it, there is literally no other way.
For example, one place where I did use call/cc heavily was in transforming a yacc (well a scheme equivalent) generated parser from using a model where it controlled all the IO into one where it was event-driven. I needed a large-scale, global transformation of code I did not control and call/cc fit the bill perfectly. I also paid for it heavily with a program that ran rather slower and allocated way more heavily than it needed to - as I discovered when I rewrote the parser from scratch as an event-driven system.
My point is simply this - any tail-call is a continuation invocation. The fact that we can pass anonymous functions around with ease in Scheme means that we can create whatever continuation we need to be executed and pass it directly to the tail-call where it should be invoked. And the really great thing is that we need *not* invoke full call/cc when we do this. We get all this for free because of the requirement for full tail-call optimization. You just need to remain aware of when you are making tail-calls - and you should be doing that anyway.
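To make that concrete, here is the early-exit product again, written with an explicit continuation argument and no call/cc (a sketch; product-k is a made-up name):

;; the "rest of the computation" is an ordinary procedure argument,
;; and because the recursive call is a tail call, hitting a zero
;; returns 0 straight to the top: no pending frames, no call/cc
(define (product-k lst k)
  (cond ((null? lst) (k 1))
        ((zero? (car lst)) 0) ; drop k: nothing is pending
        (else (product-k (cdr lst)
                         (lambda (r) (k (* (car lst) r)))))))

(product-k '(1 2 3 0 4) (lambda (r) r)) ; => 0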
david
On Sep 26, 2008, at 9:23 AM, Marc Feeley wrote:
Another point I want to make is that Cheney on the MTA gives you "free" call/cc only after paying a premium on other things, namely stack-like behaving function calls and tail-calls. Because typical code, and even realistic call/cc intensive code such as a thread system, do much more of these other things than calling call/cc, the overall performance of the system is suboptimal in general. With the latest set of benchmark results on 51 benchmark programs, in "r6rs" mode Chicken is 2.7 times slower than Gambit on average (geometric mean). That's the cost of "free" call/cc.
Marc:
I'd just like to point out here that, as I'm sure you're aware, there are many different implementation decisions that are made in each of Chicken and Gambit and it is unlikely that any speed difference between the two can be attributed to any single design decision. The choice of "free" call/cc via Cheney on the MTA may, indeed, be an implementation choice that affects adversely nearly all other aspects of an implementation (which I doubt), but that could only be determined after quite a bit of analysis.
Brad
On Sep 26, 2008, at 11:29 AM, Bradley Lucier wrote:
I'd just like to point out here that, as I'm sure you're aware, there are many different implementation decisions that are made in each of Chicken and Gambit and it is unlikely that any speed difference between the two can be attributed to any single design decision.
That should be "general" speed difference between the two.
Brad
On 26-Sep-08, at 11:29 AM, Bradley Lucier wrote:
On Sep 26, 2008, at 9:23 AM, Marc Feeley wrote:
Another point I want to make is that Cheney on the MTA gives you "free" call/cc only after paying a premium on other things, namely stack-like behaving function calls and tail-calls. Because typical code, and even realistic call/cc intensive code such as a thread system, do much more of these other things than calling call/cc, the overall performance of the system is suboptimal in general. With the latest set of benchmark results on 51 benchmark programs, in "r6rs" mode Chicken is 2.7 times slower than Gambit on average (geometric mean). That's the cost of "free" call/cc.
Marc:
I'd just like to point out here that, as I'm sure you're aware, there are many different implementation decisions that are made in each of Chicken and Gambit and it is unlikely that any speed difference between the two can be attributed to any single design decision. The choice of "free" call/cc via Cheney on the MTA may, indeed, be an implementation choice that affects adversely nearly all other aspects of an implementation (which I doubt), but that could only be determined after quite a bit of analysis.
Brad
Yes (of course). I did not mean that the cost of Cheney on the MTA is "2.7 times slower code on average". I meant that a part of that factor of 2.7 was due to Cheney on the MTA. I think it is a substantial part, but the exact amount would require a lot of analysis... enough to generate several interesting research papers!
Marc
Joel J. Adamson adamsonj@email.unc.edu wrote:

>> http://chondestes.bio.unc.edu/svn/genxic/trunk
You didn't mention that you were using the FFI. This is allocating memory in C using gsl_matrix_alloc (and maybe other functions), right? And I don't see any release functions being declared. So, how many of those are you allocating?.. 8~)
Christian.
"chj" == Christian Jaeger christian@pflanze.mine.nu writes:
chj> Joel J. Adamson adamsonj@email.unc.edu wrote:
>> http://chondestes.bio.unc.edu/svn/genxic/trunk
chj> You didn't mention that you were using the FFI. This is
chj> allocating memory in C using gsl_matrix_alloc (and maybe other
chj> functions), right? And I don't see any release functions being
chj> declared. So, how many of those are you allocating?.. 8~)
No matrices for this code; I did comment out the release functions because I was getting a double-free error, and this simulation uses only one of my GSL data structures, the random number generator (gsl_rng*), allocated once in the "main.scm" module.
The code with the release functions is currently like this:
;; declare types
;; gsl-matrix pointer
(c-define-type gsl-matrix* (pointer "gsl_matrix" gsl-matrix*))
;; release function
;; "GENXIC_RELEASE_gsl_obj"))

;; gsl-vector pointer
(c-define-type gsl-vector* (pointer "gsl_vector" gsl-vector*))
;; "GENXIC_RELEASE_gsl_obj"))

(c-define-type gsl_rng* (pointer "gsl_rng" gsl_rng*))
;; "GENXIC_RELEASE_gsl_obj"))
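For reference, re-enabling a release function would presumably look like the following, judging from the commented-out fragments above (a hedged sketch; GENXIC_RELEASE_gsl_obj is the C function named in the original code, and it must free each object exactly once):

;; hedged sketch: with the release function back in place, the
;; Gambit GC frees the C object when its Scheme proxy is collected;
;; wrapping the same C pointer in two foreign objects would then
;; produce exactly the double free described above
(c-define-type gsl_rng*
  (pointer "gsl_rng" gsl_rng* "GENXIC_RELEASE_gsl_obj"))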
Joel
which version of Gambit? I've not been keeping up at the top of the release tree...
On Wed, Sep 24, 2008 at 8:24 PM, Joel J. Adamson <adamsonj@email.unc.edu> wrote:
"chj" == Christian Jaeger christian@pflanze.mine.nu writes:
...
david
David Rush wrote:
which version of Gambit? I've not been keeping up at the top of the release tree...
Which version for what? I'm running v4.2.6.
Regarding the double free, Joel will need to either make sure he's not creating multiple foreign objects for the same C data structure, or use reference counts in or around the C data structures.
Christian.
On Wed, Sep 24, 2008 at 10:15 PM, Christian Jaeger <christian@pflanze.mine.nu> wrote:
David Rush wrote:
which version of Gambit? I've not been keeping up at the top of the release tree...
Which version for what? I'm running v4.2.6.
Actually, I meant Joel :)
david
"DR" == David Rush kumoyuki@gmail.com writes:
DR> On Wed, Sep 24, 2008 at 10:15 PM, Christian Jaeger
DR> <christian@pflanze.mine.nu> wrote:
DR> David Rush wrote:
DR> which version of Gambit? I've not been keeping up at the
DR> top of the release tree...
DR> Which version for what? I'm running v4.2.6.
DR> Actually, I meant Joel :)
I update every week.
Joel
On 22-Sep-08, at 3:26 PM, Joel J. Adamson wrote:
Getting the code
The code is available for public download from svn at
The following code is rather odd:

(define (mem-handler exc)
  (if (or (heap-overflow-exception? exc)
          (stack-overflow-exception? exc))
      (if (noncontinuable-exception? exc)
          (abort exc)
          exc)
      (with-exception-catcher error-handler
                              (lambda () (raise exc)))))
Why do you test for noncontinuable-exceptions? Why do you call with-exception-catcher with a thunk that immediately raises an exception? Might as well just do (error-handler exc). What are you trying to accomplish? I suggest you use this instead, to determine if it is causing your problem:
(define (mem-handler exc) (display-exception exc) (exit 1))
Marc
"MF" == Marc Feeley feeley@iro.umontreal.ca writes:
MF> On 22-Sep-08, at 3:26 PM, Joel J. Adamson wrote:
>> Getting the code
>> ================
>>
>> The code is available for public download from svn at
>>
>> http://chondestes.bio.unc.edu/svn/models/agjones
MF> The following code is rather odd
This is exactly why I'm offering my code for review ;)
MF> What are you trying to accomplish?
My thinking was that the exception that causes the crash should be a noncontinuable exception and that I would want to enter the debugger at that point. I hope your question isn't asking me to defend my code ;) I'm learning this by trial-and-error and kind-review-by-people-who-know-way-more-than-me. I would much prefer that you tell me I'm dead wrong and doing it all wrong.
MF> I suggest you use this instead, to determine if it is causing
MF> your problem:
MF> (define (mem-handler exc) (display-exception exc) (exit 1))
The problem happens regardless of whether the exception catcher is in effect.
Thanks, Joel
OK, time to call in the debugging cavalry... please recompile your Gambit like this:
% cp gsc/gsc non-debugged-gsc
% ./configure CC="gcc -D___DEBUG_HOST_CHANGES" --enable-debug
% make mostlyclean
% make
% cp non-debugged-gsc gsc/gsc   # to avoid having a Gambit compiler with tracing
The combination of -D___DEBUG_HOST_CHANGES and --enable-debug will cause the runtime system to write a very detailed trace of the execution to the file "console" when a program linked with the runtime is run (including gsi/gsi).
Now recompile your program with gsc/gsc and execute your program (the important thing is to link with the new runtime in lib/libgambc.a, or to use the interpreter gsi/gsi to load your .o1 files). In a separate xterm do:
% cd your-working-directory
% tail -f console
In this xterm you will see a detailed trace of what your program is doing. All the control transfers between Scheme procedures will be traced. Here's a sample output:
*** Entering ##write-char
*** Entering ##write-substring (subprocedure 3)
*** Entering ##kernel-handlers (subprocedure 2)
*** Entering ##interrupt-handler
*** Entering ##thread-heartbeat!
*** Entering ##thread-check-devices!
*** Entering ##os-condvar-select!
*** Entering ##thread-check-devices! (subprocedure 1)
*** Entering ##thread-heartbeat! (subprocedure 1)
*** Entering ##thread-check-timeouts!
*** Entering ##get-current-time!
*** Entering ##thread-check-timeouts! (subprocedure 1)
*** Entering ##thread-heartbeat! (subprocedure 2)
*** Entering ##thread-yield!
This means that ##write-char was jumped to (probably a function call), and then ##write-substring was jumped to (probably a function return because it is a "subprocedure"), etc.
By looking at the tail of the trace you may get a better idea of what the Scheme code was doing when your problem occurs. Note that the extensive tracing slows down the program dramatically, so it may take a while for your program to reach the point where the problem occurs.
Happy debugging!
Marc
On 25-Sep-08, at 3:47 PM, Joel J. Adamson wrote:
"MF" == Marc Feeley feeley@iro.umontreal.ca writes:
MF> On 22-Sep-08, at 3:26 PM, Joel J. Adamson wrote:
Getting the code
================
The code is available for public download from svn at
MF> The following code is rather odd
This is exactly why I'm offering my code for review ;)
MF> What are you trying to accomplish?
My thinking was that the exception that causes the crash should be a noncontinuable exception and that I would want to enter the debugger at that point. I hope your question isn't asking me to defend my code ;) I'm learning this by trial-and-error and kind-review-by-people-who-know-way-more-than-me. I would much prefer that you tell me I'm dead wrong and doing it all wrong.
MF> I suggest you use this instead, to determine if it is causing
MF> your problem:
MF> (define (mem-handler exc) (display-exception exc) (exit 1))
The problem happens regardless of whether the exception catcher is in effect.
Thanks, Joel

--
Joel J. Adamson
University of North Carolina at Chapel Hill
CB #3280, Coker Hall
Chapel Hill, NC 27599-3280
Before you reply to this email, please read http://www.unc.edu/~adamsonj/email-howto.html