Now that truly concurrent threading is working fairly well, I decided to benchmark Gambit against Python on a simple threaded program: a threaded Fibonacci that creates 30,000 threads, with a granularity of roughly 50 microseconds of work per thread. I was happy to see that Gambit performs well. Here are the timings:
% time gsi -:p4 tfib.scm
real    0m0.355s
user    0m1.234s
sys     0m0.041s

% time python3 tfib.py
real    0m3.965s
user    0m3.326s
sys     0m1.535s
On 4 processors Gambit’s “user” time is about 3.5 times the “real” time (1.234 s vs 0.355 s), so the work really is being spread across the processors, and the system time is almost nil.
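For reference, tfib.py itself isn’t shown in this post, so here is only a rough sketch, in Python, of the shape such a benchmark might have (the names, the cutoff value, and the top-level call are my own guesses, not the actual code): spawn a thread for one branch of the recursion until the argument gets small, then compute sequentially so that each thread does a small, fixed amount of work.

import threading

def fib(n):
    # Plain sequential Fibonacci: the small unit of work done by each thread.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

def tfib(n, cutoff=18):
    # Spawn a thread for one of the two recursive branches until n drops
    # below the cutoff, then fall back to the sequential version so that
    # each thread performs only a bounded amount of work.
    if n < cutoff:
        return fib(n)
    result = []
    t = threading.Thread(target=lambda: result.append(tfib(n - 1, cutoff)))
    t.start()
    b = tfib(n - 2, cutoff)
    t.join()
    return result[0] + b

if __name__ == "__main__":
    print(tfib(30))

The Gambit version would have the same shape, with make-thread, thread-start! and thread-join! in place of threading.Thread.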
But wait a second… the Python system time is huge, and the user and real times are roughly the same. After a little bit of research I recalled the GIL (Global Interpreter Lock), which effectively serializes execution of the interpreter so that only one thread is active at any point in time (while running interpreter code). That explains both numbers: with no parallelism the real time can’t drop below the user time, and the threads fighting over the lock show up as system time. I can’t believe that such a crappily implemented language can be so popular…
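The effect of the GIL is easy to see with a few lines of Python (this is just an illustration, not part of the benchmark above): running a CPU-bound function in four threads takes about the same wall-clock time as running it four times in one thread, even on a multi-core machine.

import threading, time

def burn(n):
    # Pure-Python CPU-bound loop; it holds the GIL while executing bytecode.
    while n:
        n -= 1

N = 10_000_000

t0 = time.perf_counter()
for _ in range(4):
    burn(N)
print("sequential:", time.perf_counter() - t0)

t0 = time.perf_counter()
threads = [threading.Thread(target=burn, args=(N,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("4 threads: ", time.perf_counter() - t0)

On CPython the two times come out roughly equal, because only one thread can execute bytecode at a time; the extra threads mostly add lock contention and context switches, which is where the system time goes.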
Any suggestions for a popular and efficient threaded language to compare to?
Marc