Hi, I'm rather new to Gambit, so I apologise if this is an incredibly ignorant question. I've been playing around with a few different Scheme implementations and some Project Euler solutions, and I've been quite surprised that many of them run much faster in MzScheme than in Gambit. Looking into it further, most of the gap is in *real* (wall-clock) time; the CPU times are much closer. Here's an example, using the source code at:
http://real.metasyntax.net:2357/cvs/cvsweb.cgi/Programs/Euler/Scheme/092.scm
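To make the timings below self-contained: the kernel of that program, as far as I can tell, is summing the squares of a number's decimal digits. Here's a minimal paraphrase of what "sum-of-squares" means below (the name and exact structure are mine, not necessarily what 092.scm actually uses):

    ;; Sum of the squares of n's decimal digits,
    ;; e.g. (sum-of-squares 92) => 85.
    (define (sum-of-squares n)
      (let loop ((n n) (acc 0))
        (if (zero? n)
            acc
            (let ((d (remainder n 10)))
              (loop (quotient n 10) (+ acc (* d d)))))))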
For reference, I'm using Gambit 4.4.2 on OpenBSD 4.5 x86 configured with --enable-single-host.
== GAMBIT 4.4.2 with sum-of-squares in Scheme using `gsc -link` and `gcc -O2`
real 1m34.795s    user 0m46.520s    sys 0m0.650s
== GAMBIT 4.4.2 with sum-of-squares in C using `gsc -link` and `gcc -O2`
real 0m46.421s    user 0m19.380s    sys 0m7.510s
(When using `gsc -flat` and loading the result into gsi, I get essentially the same times.)
== CHICKEN 4.0.0 with sum-of-squares in Scheme using plain `csc`
real 0m41.441s    user 0m40.840s    sys 0m0.600s
== MZSCHEME 4.1.4 with sum-of-squares in Scheme using `mzscheme -f`
real 0m25.530s    user 0m25.330s    sys 0m0.190s
Looking at the times, Gambit's wall-clock run time is nearly twice its total on-CPU time (user + sys), whereas the other two implementations use about as much real time as CPU time. I'm curious why Gambit spends so much more wall-clock time than CPU time.
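In case it helps, here's roughly how I've been sanity-checking the gap from inside gsi, using Gambit's `real-time` and `cpu-time` procedures (assuming I'm reading the manual correctly, both return elapsed seconds):

    ;; Time a thunk with both clocks and print the difference.
    (define (measure thunk)
      (let ((r0 (real-time))
            (c0 (cpu-time)))
        (thunk)
        (pp (list 'real: (- (real-time) r0)
                  'cpu:  (- (cpu-time) c0)))))

    ;; A purely CPU-bound loop; I'd expect real ~= cpu here, so a
    ;; large gap should point at something outside the computation.
    (measure
     (lambda ()
       (let loop ((i 0) (acc 0))
         (if (< i 10000000)
             (loop (+ i 1) (+ acc 1))
             acc))))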
Thanks for any clarification,