...thinking back on earlier experiments, I have seen just about the same degree of slowdown when converting from applicative style to pure functional lazy style (a la Haskell). So that leads me to conclude further that it is the production and subsequent handling of closures that costs the speed.
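
To make that concrete, here is a minimal sketch, in OCaml rather than the languages I actually measured, of the kind of rewrite I mean; the function names are just illustrative. The lazy version suspends every tail in a thunk (a closure) that the direct version never creates, and then has to force each one again on the way back.

    (* Direct (applicative) style: no per-element closures. *)
    let rec sum_direct = function
      | [] -> 0
      | x :: xs -> x + sum_direct xs

    (* Lazy style: every tail is a suspension that must be forced. *)
    type 'a lazy_list =
      | Nil
      | Cons of 'a * 'a lazy_list Lazy.t

    let rec sum_lazy = function
      | Nil -> 0
      | Cons (x, xs) -> x + sum_lazy (Lazy.force xs)

    (* Building the list allocates one suspension (closure) per cell. *)
    let rec up_to i n =
      if i > n then Nil else Cons (i, lazy (up_to (i + 1) n))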

Dr. David McClain
Sr. VP, Embedded Systems
Asyrmatos Inc.
Boston & Tucson
phone:  520-529-2437
cell:  520-390-3995
web:  www.asyrmatos.com




On May 22, 2009, at 13:04, D.McClain wrote:

Bingo!

On May 22, 2009, at 12:51, Taylor R Campbell wrote:

   Date: Fri, 22 May 2009 12:45:59 -0700
   From: "D.McClain" <dbm@asyrmatos.com>

   Okay, you have my attention... let's see that isomorphism in practice.

I leave that as an exercise for the reader.

   I have made my own measurements, and invariably the CPS form comes
   out 30% slower than the direct form.  Perhaps the compilers
   producing the actual native code have been tuned to look for common
   human idioms rather than for CPS traits?

The way you say that suggests to me that you are using the *same*
compiler to compare a direct-style program with the same program
converted to continuation-passing style.  Unless the compiler is
extremely clever, it will probably generate worse code for the CPS
form of the program, for the reason I explained in my first message.
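
To illustrate the comparison I mean, here is a minimal sketch in OCaml (the names are mine, purely for illustration): the same function in direct style and hand-converted to CPS, both fed to the same compiler.  Every recursive step of the CPS version allocates a fresh continuation closure, which a compiler that was not designed around CPS will typically heap-allocate instead of recognizing as what would otherwise have been an ordinary stack frame.

    (* Direct style: the compiler can just use the call stack. *)
    let rec fact_direct n =
      if n = 0 then 1 else n * fact_direct (n - 1)

    (* CPS form of the same function: each call allocates the closure
       (fun r -> k (n * r)), capturing n and k. *)
    let rec fact_cps n k =
      if n = 0 then k 1 else fact_cps (n - 1) (fun r -> k (n * r))

    let () = assert (fact_direct 10 = fact_cps 10 (fun r -> r))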

That's a very different question, however, from the question of how
the use of a CPS intermediate representation affects the code that a
compiler generates.
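
For contrast, a CPS intermediate representation is a data structure inside the compiler, not source text the compiler is asked to digest.  A minimal sketch, roughly in the spirit of Appel's "Compiling with Continuations" (the constructor names here are illustrative, not any real compiler's IR):

    (* Every intermediate result has a name, and control transfer is
       always a call that never returns. *)
    type var = string

    type value =
      | Var of var
      | Int of int

    type term =
      | LetVal  of var * value * term                 (* bind a constant *)
      | LetPrim of var * string * value list * term   (* x = prim(args) in ... *)
      | LetCont of var * var list * term * term       (* bind a continuation *)
      | App     of value * value list                 (* call; no return *)

How good the generated machine code is then depends on how the back end treats this structure, not on whether the programmer wrote continuations by hand.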