Dear Marc,
Thanks for all your clarifications.
I gather from you that auto-forcing is fundamentally expensive to do, and that for this reason, to solve my particular problem, I should reduce the set of Gambit primitives that need to auto-force to a minimum.
And I agree with you that if such a simple reduction of the problem turns out not to be possible, then dataflow analysis would be a good fit.
To fully understand the implications of this problem, below I'd like to briefly ask you how forcing and the auto-forcing transformation actually work, and also to check whether forcing via protected virtual memory could ever be a good idea -
2017-09-19 3:28 GMT+08:00 Marc Feeley feeley@iro.umontreal.ca: [..]
Wait, what does |##apply-with-procedure-check| actually do? In what situations is it invoked - is it run on every (f a1 a2 ...), with oper = f and args = (list a1 a2 ...), for any procedure call made anywhere, when compiling with (declare (safe))?
No… In the scope of a (declare (safe)) the generated C code will check if the “operator” position, the f here, is a procedure. A direct transfer of control to f is done when f is a procedure. The function ##apply-with-procedure-check is tail called by the runtime system when (##procedure? oper) is #f.
Ah, I understand - right, so auto-forcing has zero overhead for the operator position of procedure calls. Great!
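If I picture that dispatch in Scheme (just my mental model - the real mechanism is in the generated C code, and the calling convention below is only my guess from my question above), a safe call site would behave like:

;; Sketch of a safe call site with two arguments (my mental model only):
;; the fast path is a direct control transfer, so a promise in operator
;; position costs nothing there; it is handled entirely on the slow path.
(define (dispatch oper a1 a2)
  (if (##procedure? oper)
      (oper a1 a2)
      (##apply-with-procedure-check oper (list a1 a2))))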
Will test and benchmark following your next clarification.
Thanks a lot!
I don’t understand why you are so concerned with this issue (forcing the operator position of a call)… The real overhead is auto-forcing data-structures… A good approach to minimize the overhead is a dataflow analysis or even BBV…
I made a preliminary test of --enable-auto-forcing's overhead, and it suggested that --enable-auto-forcing out of the box incurs something like 400% overhead on digest.scm, which is admittedly quite an unfair example.
Without meaning to be pedantic, I went on to ask how auto-forcing could be made faster, and came up with the idea that removing operator forcing could help speed things up.
Now you have clarified that operator forcing actually has zero overhead - thanks.
I still need to verify the 400% overhead figure, but where does most of the auto-forcing overhead come from?
Is it that ##force, which macro-force-vars applies to every single value in the system at every evaluation point ( https://github.com/gambit/gambit/blob/29103e6a29b8fbbf7d6fc772a344b814be3f1c... ), has an inherent overhead in that it adds an extra variable slot, a type check, a comparison, and a conditional jump?
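For concreteness, I imagine macro-force-vars expands to something roughly like this (my own define-macro sketch, not Gambit's actual definition - presumably without auto-forcing it expands to just the body):

;; My guess at the shape of macro-force-vars: each listed variable is
;; rebound to its forced value, so the body only ever sees non-promises.
(define-macro (macro-force-vars vars . body)
  `(let ,(map (lambda (v) `(,v (##force ,v))) vars)
     ,@body))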
I can't find ##force's code anywhere, so it appears to me that it is generated by the compiler and inlined. I presume ##force's pseudocode would look something like this:
(define-prim (##force value)
  (if (##promise? value)
      (begin
        ;; run the promise's code once and memoize its result
        (if (not (##promise-value-slot-set? value))
            (##promise-value-slot-set! value
              ((##promise-thunk-for-promise-code value))))
        (##promise-value-slot value))
      value))
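If that guess is right, the observable behaviour is just the standard memoized-promise semantics:

(##force (delay (* 6 7)))  ;; => 42: runs the promise's code and memoizes it
(##force 42)               ;; => 42: a non-promise passes through unchanged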
Its application in auto-forcing would then be a transformation from something like

(define (language-primitive op .. arg ..)
  ..logic..)

to

(define (language-primitive op .. arg ..)
  (let ((arg (##force arg)))
    ..logic..))
I guess in this light, this alternative transformation would not be of any particular use:
(define (language-primitive op .. arg ..)
  (set! arg (##force arg))
  ..logic..)
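Either way, to make the data-structure overhead you mention concrete, here is how I picture a list-walking primitive after the transformation (a hypothetical my-length of my own, not actual generated code):

;; Hypothetical: list length under auto-forcing. The list itself and
;; every cdr along the spine may be a promise, so each one is forced,
;; adding a type check and a branch per element traversed.
(define (my-length lst)
  (let loop ((lst (##force lst)) (n 0))
    (if (##pair? lst)
        (loop (##force (##cdr lst)) (##fx+ n 1))
        n)))

That per-element check on every traversal seems to be exactly the overhead you are referring to.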
An exotic idea would be to use protected virtual memory, as described here: https://medium.com/@MartinCracauer/generational-garbage-collection-write-bar... . I suspect this probably would not work out at all, but I would like to ask you about it briefly anyhow -
For it to be useful globally, Brooks/forwarding pointers would need to be enabled in Gambit (normally the forwarding pointer would be a self-reference, whereas for promises it would point into protected memory, so that dereferencing it triggers a SIGSEGV, which would be used as the trigger to run the promise code and update the pointers).
For a more limited use case where the promise code's result type and size are known in advance, virtual memory for the promise value could be pre-allocated, and the SIGSEGV handler's job would be to kick off the evaluation that fills those protected memory addresses with real values.
However, I doubt that the SIGSEGV handler could easily be made to interact with the Scheme world!
That is, the SIGSEGV handler would cause a trampoline jump to the promise code and, on completion of the promise code, would store away its result and continue at the Scheme code location that triggered the SIGSEGV.
I guess this would be difficult or impossible, because we don't know exactly which location in the C code triggered the SIGSEGV, and so the GVM might not be in a consistent enough state for a trampoline into the Scheme world to take place -
I guess this is so far out that it should not even be considered - what do you say?
So then, right, code analysis would help.
Also, perhaps the simplest speedup would come from reducing forcing to take place only at a very limited subset of primitives.
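For example (purely my own sketch, not existing Gambit machinery), forcing could be confined to the few primitives that must see through promises - say, the pair accessors - while everything else compiles without any promise check:

;; Hypothetical forcing variants of the pair accessors; all other
;; primitives (arithmetic, vector access, ...) would stay check-free.
(define (forcing-car x) (##car (##force x)))
(define (forcing-cdr x) (##cdr (##force x)))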
Thanks, Adam