I was wondering if anyone would be kind enough to explain why calling code compiled into a separate module is faster than calling code within the same module.
Consider the following 2 files.
```
;; file2.scm
(define (calc i) (+ i 10))
```

```
;; file1.scm
(include "file2.scm")

(time (let loop ((i 0))
        (calc i)
        (if (< i 1000000) (loop (+ i 1)))))
```
Compiling "file1.scm" into an executable and running outputs the following: 90 ms real time 89 ms cpu time (88 user, 1 system) no collections no bytes allocated no minor faults no major faults
Now compile file2.scm into a module, file2.o1, and change the include statement in file1.scm to `(load "file2")`. Compile file1.scm into an executable and run it; the following is printed:

```
67 ms real time
65 ms cpu time (65 user, 0 system)
no collections
no bytes allocated
no minor faults
no major faults
```
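For reference, the modified file1.scm is essentially the same loop, just with the include replaced by a load:

```
;; file1.scm (module version)
(load "file2")

(time (let loop ((i 0))
        (calc i)
        (if (< i 1000000) (loop (+ i 1)))))
```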
From my limited experience with Gambit's evaluation and compilation mechanisms, I would expect the exact opposite. Using `gsc -expansion`, I can see that the include statement does splice in the code as I thought, but why then is it faster to call the same code when it is compiled into a separate module? I would have thought it would incur a slight performance penalty for having to cross a module boundary.
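(I inspected the expansion with something along the lines of the following; the exact invocation may vary.)

```
$ gsc -expansion file1.scm
```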
Gambit v4.0.1, compiled with --enable-single-host and --enable-gcc-opts