I just read an interesting paper entitled "Towards a Portable and Mobile Scheme Interpreter", describing an interpreter called Mobit, which can serialize closures, continuations, and new functions across a channel to cooperating Termite sessions. Is this work still ongoing / available?
Dr. David McClain
Sr. VP, Embedded Systems
Asyrmatos Inc.
Boston & Tucson
phone: 520-529-2437
cell: 520-390-3995
web: www.asyrmatos.com
e-mail: dbm@asyrmatos.com
On May 23, 2009, at 02:06, Adrien Piérard wrote:
Functions of fixed, unary, or binary arity are unlikely to be very expensive, it seems to me. Since you know those functions, I believe you can get some pretty good optimisations (inlining, special calling conventions, etc.).
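For instance (just a sketch of the general idea, nothing Gambit-specific, and generic-call / call-binary are made-up names): a call to a function whose arity is known and fixed can skip the generic build-an-argument-list-and-check-arity path entirely, and a small body becomes a candidate for inlining:

    ;; generic path: the arguments are gathered into a list and
    ;; applied, with the arity checked at run time by APPLY
    (define (generic-call proc args)
      (apply proc args))

    ;; known binary call: the two arguments are passed directly,
    ;; no argument list is built, and PROC's body may be inlined
    (define (call-binary proc x y)
      (proc x y))

    ;; (call-binary + 1 2) => 3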
Also, the fact that the compiler handles continuations very well and the fact that it does not transform code to CPS at some point are not that closely related. Maybe the compiler itself is written in CPS (look for Marc's "Using closures for code generation"). (This method works great for interpreters; I don't know how well it works for compilers, in fact…)
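If you have not seen that paper, the idea is roughly the following (a toy sketch handling only numbers, variables and +): each expression is compiled once into a closure, so later evaluation is just closure calls, with no repeated dispatch on the shape of the expression:

    ;; compile EXPR once into a closure of one argument (the environment)
    (define (comp expr)
      (cond ((number? expr) (lambda (env) expr))
            ((symbol? expr) (lambda (env) (cdr (assq expr env))))
            ((eq? (car expr) '+)
             (let ((a (comp (cadr expr)))
                   (b (comp (caddr expr))))
               (lambda (env) (+ (a env) (b env)))))
            (else (error "unhandled expression" expr))))

    ;; ((comp '(+ x 1)) '((x . 41))) => 42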
As for Haskell (or any language), the problem is probably you. You assume some equivalences exist and that efficiency should therefore carry over as well. Pure functional lazy-style code is bound to be slower unless you have deep knowledge of the semantics of the language, of the compiler, and of its optimisations. Haskell easily makes code aesthetic, but only *good* programmers make Haskell code efficient (even though the GHC compiler does really good optimisations).
My 2¢.
P!
2009/5/23 D.McClain dbm@asyrmatos.com:
...thinking back on earlier experiments, I have seen just about the same degree of slowdown when converting from applicative style to pure functional lazy style (a la Haskell). So that leads me to conclude further that it is the production and subsequent handling of closures that costs the speed.
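A concrete way to see that (just a Scheme sketch, with delay/force standing in for Haskell's laziness): the lazy version of the same loop allocates a promise, i.e. a closure, per element, and that extra production and later forcing of closures is where the time goes:

    ;; applicative style: one multiplication per element
    (define (squares lst)
      (map (lambda (x) (* x x)) lst))

    ;; lazy style: every element also allocates a promise (a closure)
    ;; which must later be forced
    (define (lazy-squares lst)
      (map (lambda (x) (delay (* x x))) lst))

    (define (force-all lst)
      (map force lst))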
-- Français, English, 日本語, 한국어