On Sun, Apr 26, 2009 at 01:05:31PM -0400, Marc Feeley wrote:
> On 26-Apr-09, at 12:09 PM, Taylor Venable wrote:
>> On Sun, Apr 26, 2009 at 10:10:39AM -0400, Marc Feeley wrote:
>>> This sounds like more than one UNIX process is running. Can you try running "top" while you execute ./a.out?
>> First of all, I'm on a system that looks like it has two CPUs (I have one of those Pentium 4 processors with hyper-threading).
>> Looking at `top` and comparing the three aforementioned Scheme implementations, I see some interesting results: whereas Chicken and MzScheme both stay in the running state (onproc), Gambit frequently does a nanosleep, which forces it back into the ready-to-run state (run) before it can execute again. The Chicken and MzScheme implementations keep the CPU they run on at 100%, but it is about 40-60% idle when running the Gambit version.
> Strange indeed! Is your program calling thread-sleep! or process-status? I don't see why it would call nanosleep otherwise. Actually, can you send me your program?
Sure. Attached is compiled.scm, which uses C code for the compute-intensive part; you can also comment that out and replace it with the pure-Scheme version. Both variants are shown in the timings below.
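In case it helps to picture it before the attachment opens, the file has roughly this shape (a minimal illustrative sketch only; the C body and the names below are stand-ins, not the real code):

;; sketch of compiled.scm -- illustration, not the attachment
(c-declare #<<c-declare-end
/* stand-in for the real C part */
double sum_of_squares (double n)
{
  double i, s = 0.0;
  for (i = 1.0; i <= n; i += 1.0)
    s += i * i;
  return s;
}
c-declare-end
)

;; C-backed version, via gsc's C interface
(define sum-of-squares
  (c-lambda (double) double "sum_of_squares"))

;; pure-Scheme version; comment out the forms above and
;; uncomment this to compare
;; (define (sum-of-squares n)
;;   (let loop ((i 1.0) (s 0.0))
;;     (if (> i n) s (loop (+ i 1.0) (+ s (* i i))))))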
> Please let me know if that solves the problem. It would be interesting to know if this problem also occurs on other UNIX flavors, or if it is specific to Linux (which I assume you are using). Which kernel is this on?
UNDEF-ing that definitely improved things. Here are some new timings:
When I do `gsc -link` and compile with `gcc -O2`, I get:
(sum-of-squares written in C)
real    0m26.889s
user    0m19.580s
sys     0m7.290s
When I do plain `gsc` and use `gsi 092.o1` I get:
(sum-of-squares written in C)
real    0m25.510s
user    0m18.130s
sys     0m7.360s
When I do plain `gsc` and use `gsi 092.o2` I get:
(using pure-Scheme sum-of-squares)
real    0m50.049s
user    0m49.600s
sys     0m0.430s
So putting the UNDEF in place makes it run onproc nearly constantly and brings the wall-clock time down to the expected range. Another thing I wonder about: should the system time be that high for the code that uses the C implementation of sum-of-squares? Is that the result of translating data between Scheme and C types?
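If it is the conversions, one way to probe that (a sketch using a hypothetical tiny helper, not the real code) is to make the boundary crossing as frequent as possible and see where `time` says the cost lands:

(c-declare #<<c-declare-end
/* hypothetical one-argument helper, just to exercise the FFI */
double square (double x) { return x * x; }
c-declare-end
)

(define square-c (c-lambda (double) double "square"))

;; crosses the Scheme<->C boundary once per element, so any
;; argument/result conversion overhead is paid n times
(define (sum-squares-chatty n)
  (let loop ((i 1.0) (s 0.0))
    (if (> i n)
        s
        (loop (+ i 1.0) (+ s (square-c i))))))

(time (sum-squares-chatty 10000000.))

My (hedged) expectation is that conversion overhead would show up as user time, since it is ordinary in-process work; a large sys figure usually points at system calls instead.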
The OS I'm running is OpenBSD 4.5 on x86 (a 3.0 GHz Pentium 4 with HT, 1024 MB RAM), using the bsd.mp kernel to make use of both "CPUs".
At the risk of saying too much at once, I also receive this message during the configure step:
checking sys/sysctl.h usability... no
checking sys/sysctl.h presence... yes
configure: WARNING: sys/sysctl.h: present but cannot be compiled
configure: WARNING: sys/sysctl.h: check for missing prerequisite headers?
configure: WARNING: sys/sysctl.h: see the Autoconf documentation
configure: WARNING: sys/sysctl.h: section "Present But Cannot Be Compiled"
configure: WARNING: sys/sysctl.h: proceeding with the preprocessor's result
configure: WARNING: sys/sysctl.h: in the future, the compiler will take precedence
configure: WARNING: ## -------------------------------------- ##
configure: WARNING: ## Report this to gambit@iro.umontreal.ca ##
configure: WARNING: ## -------------------------------------- ##
checking for sys/sysctl.h... yes
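If it is relevant: my guess (unverified) is that this is the usual BSD quirk where sys/sysctl.h only compiles after sys/param.h has been included, which Autoconf's bare compile test doesn't do. The same ordering would apply when pulling the header in through Gambit's C interface, e.g.:

(c-declare #<<c-declare-end
#include <sys/param.h>   /* must come first on the BSDs... */
#include <sys/sysctl.h>  /* ...or this header fails to compile */
c-declare-end
)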
Not sure whether the warning has any bearing on this problem. What I can say with confidence is that avoiding nanosleep improved performance considerably in this situation. I have no other systems here at home to test this on, but I can try my Ubuntu x86_64 box at work tomorrow if you like.
Thanks,