Fixed-arity (unary or binary) functions are unlikely to be very expensive, in my opinion.
Since those functions are known to the compiler, I believe you can get some
pretty good optimisations (inlining, special calling conventions,
etc.).
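
Just to illustrate the calling-convention point (a made-up Haskell sketch with
invented names, not how any particular compiler does it): a generic entry point
has to take an argument list and check the arity, whereas a call site where the
callee and its arity are known can go through a direct fixed-arity path.

data Closure = Closure
  { arity        :: Int
  , applyGeneric :: [Int] -> Int        -- generic path: argument list + arity check
  , applyKnown2  :: Int -> Int -> Int   -- fast path when the call is known to be binary
  }

mkAdd :: Closure
mkAdd = Closure
  { arity        = 2
  , applyGeneric = \args -> case args of
      [x, y] -> x + y
      _      -> error "wrong arity"
  , applyKnown2  = (+)
  }

main :: IO ()
main = do
  print (applyGeneric mkAdd [1, 2])  -- callee unknown: go through the generic entry
  print (applyKnown2 mkAdd 3 4)      -- known binary call: no list, no arity check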
Also, the fact that the compiler handles continuations very well and the
fact that it does not transform the code to CPS at some point are not
that closely related.
Maybe the compiler itself is written in CPS (look for Marc Feeley's "Using
Closures for Code Generation"). (This method works great for
interpreters; I don't actually know how it works out for compilers…)
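
Here is a minimal sketch of what that looks like (in Haskell rather than
Scheme, with names I just made up, so take it as the general technique and not
Marc's actual code): the tree is walked once, each node becoming a closure, and
later evaluations only call closures, never touching the AST again.

import Data.Maybe (fromMaybe)

data Expr = Lit Int | Var String | Add Expr Expr

type Env = [(String, Int)]

-- "Compile" the expression once into a closure of type Env -> Int.
compile :: Expr -> (Env -> Int)
compile (Lit n)   = \_   -> n
compile (Var x)   = \env -> fromMaybe (error ("unbound: " ++ x)) (lookup x env)
compile (Add a b) =
  let ca = compile a   -- children compiled up front, once
      cb = compile b
  in \env -> ca env + cb env

main :: IO ()
main = do
  let code = compile (Add (Var "x") (Lit 41))  -- the tree is traversed here only
  print (code [("x", 1)])                      -- after that, just closure calls
  print (code [("x", 9)])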
As for Haskell (or any language), the problem is probably you. You
assume certain equivalences exist between programs, and that their
efficiency should therefore map across as well.
Pure functional lazy-style code is bound to be slower unless you have a
deep knowledge of the semantics of the language, of the compiler and of
its optimisations.
Haskell easily makes code aesthetic, but only *good* programmers make
Haskell code efficient (even though the GHC compiler does really good
optimisations).
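
One concrete example of the kind of knowledge I mean (standard Haskell, nothing
exotic): the two sums below mean the same thing, but the lazy accumulator
builds a long chain of (+) thunks before anything is forced. Knowing that, and
knowing that GHC's strictness analysis usually rescues the first version when
you compile with -O, is part of what makes Haskell code efficient.

import Data.List (foldl')

lazySum :: [Int] -> Int
lazySum = foldl (+) 0     -- accumulator stays a growing chain of thunks

strictSum :: [Int] -> Int
strictSum = foldl' (+) 0  -- accumulator forced at every step

main :: IO ()
main = do
  print (strictSum [1 .. 1000000])
  print (lazySum   [1 .. 1000000])  -- without -O this allocates ~10^6 thunks first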
My 2¢.
P!
...thinking back on earlier experiments, I have seen just about the same
degree of slowdown when converting from applicative style to pure functional
lazy style (à la Haskell). So that leads me to conclude further that it is
the production and subsequent handling of closures that costs the speed.
--
Français, English, 日本語, 한국어