`gsc' supports -expansion, but it seems to show the basic expansion into normalized Gambit code. Is there any way to show the fully CPS-transformed forms of the code?
Purely for research purposes.
- James
Hallo,
On 5/22/09, Marc Feeley feeley@iro.umontreal.ca wrote:
On 21-May-09, at 4:54 PM, James Long wrote:
`gsc' supports -expansion, but it seems to show the basic expansion into normalized Gambit code. Is there any way to show the fully CPS-transformed forms of the code?
Nope. Gambit does not transform code to CPS style.
Also more of a curiosity, why? Is CPS irremediably slower? I've been reading a lot about compiling lately...
Cheers,
I have speculated for some time now too, after experiments with CPS based compilers on a number of occasions, that the slowdown must be due to two things:
1. All functions are now forced to accommodate the continuation parameter, whereas before a large majority of functions were niladic or unary operations,
2. The creation of the continuation arguments requires the production of a closure, which is inherently somewhat expensive.
My own work has consistently shown a 30% slowdown, independent of actual language of implementation -- be it Scheme, Lisp, or OCaml.
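As a concrete sketch of those two points (a hand-written illustration, not the output of any particular compiler), here is a small function in direct style and after a naive CPS transform:

  ;; Direct style: fixed arity, results returned in the ordinary way.
  (define (square x)
    (* x x))

  (define (sum-of-squares a b)
    (+ (square a) (square b)))

  ;; After a naive CPS transform: every procedure gains a continuation
  ;; parameter k (point 1), and every non-tail call allocates a closure
  ;; standing for "the rest of the computation" (point 2).
  (define (square/k x k)
    (k (* x x)))

  (define (sum-of-squares/k a b k)
    (square/k a
              (lambda (a2)               ; closure allocated here
                (square/k b
                          (lambda (b2)   ; and here
                            (k (+ a2 b2)))))))

  ;; (sum-of-squares 3 4)                  => 25
  ;; (sum-of-squares/k 3 4 (lambda (v) v)) => 25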
Dr. David McClain Sr. VP, Embedded Systems Asyrmatos Inc. Boston & Tucson phone: 520-529-2437 cell: 520-390-3995 web: www.asyrmatos.com e-mail: dbm@asyrmatos.com
On May 22, 2009, at 06:51, Alex Queiroz wrote:
Hallo,
On 5/22/09, Marc Feeley feeley@iro.umontreal.ca wrote:
On 21-May-09, at 4:54 PM, James Long wrote:
`gsc' supports -expansion, but it seems to show the basic expansion into normalized Gambit code. Is there any way to show the fully CPS-transformed forms of the code?
Nope. Gambit does not transform code to CPS style.
Also more of a curiosity, why? Is CPS irremediably slower? I've
been reading a lot about compiling lately...
Cheers,
-alex http://www.ventonegro.org/
On Fri, May 22, 2009 at 11:07 AM, D.McClain dbm@asyrmatos.com wrote:
I have speculated for some time now too, after experiments with CPS based compilers on a number of occasions, that the slowdown must be due to two things:
This makes sense. Somehow I became convinced that Gambit used CPS since continuations are so well supported, but of course it would use a much more efficient technique for implementing continuations (I think I remember seeing a large C file which implemented the guts of continuations).
I've also been studying Marc's "90
On Fri, May 22, 2009 at 11:49 AM, James Long longster@gmail.com wrote:
On Fri, May 22, 2009 at 11:07 AM, D.McClain dbm@asyrmatos.com wrote:
I have speculated for some time now too, after experiments with CPS based compilers on a number of occasions, that the slowdown must be due to two things:
This makes sense. Somehow I became convinced that Gambit used CPS since continuations are so well supported, but of course it would use a much more efficient technique for implementing continuations (I think I remember seeing a large C file which implemented the guts of continuations).
I've also been studying Marc's "90
oops, sent that before I finished it (thank you gmail shortcuts...). I've been studying Marc's "90 Minute Scheme to C compiler" presentation, which got me on a CPS kick.
Date: Fri, 22 May 2009 08:07:26 -0700 From: "D.McClain" dbm@asyrmatos.com
I have speculated for some time now too, after experiments with CPS based compilers on a number of occasions, that the slowdown must be due to two things:
1. All functions are now forced to accommodate the continuation parameter, whereas before a large majority of functions were niladic or unary operations,
2. The creation of the continuation arguments requires the production of a closure, which is inherently somewhat expensive.
My own work has consistently shown a 30% slowdown, independent of actual language of implementation -- be it Scheme, Lisp, or OCaml.
The use of CPS as an intermediate representation in a compiler is a red herring. It doesn't make a general difference in the performance of programs that the compiler compiles; it makes a difference only in the convenience of writing the compiler, by putting the compiler data structures into a simpler form. Two compilers can produce the same output for any given input even if one uses CPS as an intermediate representation and the other uses a completely direct style, or ANF, or SSA, or what-have-you. The use of CPS as an intermediate representation moreover has no bearing on the performance of CWCC or the representation of reified continuations at run-time.
If you observed a difference in performance between two compilers of which one uses CPS and the other does not, then you observed a difference other than the intermediate representation. For example, if you start with a compiler C, and then construct a compiler C' that first CPS-converts a program and then applies compiler C to the CPS form of the program, it will probably be the case that compiler C' generates worse code. Compilers often make stronger assumptions about continuations than about other procedures, by which continuations can be made less expensive than ordinary procedures; thus if you give a compiler a program in which continuations are not distinguished from user procedures, it can't (easily) make these assumptions, and will be forced to generate worse code for continuations than it would have generated for the original direct-style program.
The two points that you observed are inherent in any implementation of a sequential programming language with nested procedure calls. Every procedure must take a continuation and every continuation must be allocated somewhere; usually this happens in a region of memory called the stack, because continuations as a data structure behave in a stack-like manner most of the time. This is also why it is a trifle silly to say that a programming language `has continuations' -- any sequential programming language has the concept; what most lack is the ability of programs to reify continuations. But this is not a reason why the compilers you tested performed differently -- every compiler, whether it uses CPS or another intermediate representation, must conceptually add a continuation parameter to each procedure and allocate storage for continuation environments for each nested procedure call.
Very interesting... I would like to see your claims backed up by actual measurements.
Dr. David McClain Sr. VP, Embedded Systems Asyrmatos Inc. Boston & Tucson phone: 520-529-2437 cell: 520-390-3995 web: www.asyrmatos.com e-mail: dbm@asyrmatos.com
On May 22, 2009, at 12:10, Taylor R Campbell wrote:
Date: Fri, 22 May 2009 08:07:26 -0700 From: "D.McClain" dbm@asyrmatos.com
I have speculated for some time now too, after experiments with CPS based compilers on a number of occasions, that the slowdown must be due to two things:
- All functions are now forced to accommodate the continuation
parameter, whereas before a large majority of functions were niladic or unary operations,
- The creation of the continuation arguments requires the
production of a closure, which is inherently somewhat expensive.
My own work has consistently shown a 30% slowdown, independent of actual language of implementation -- be it Scheme, Lisp, or OCaml.
The use of CPS as an intermediate representation in a compiler is a red herring. It doesn't make a general difference in the performance of programs that the compiler compiles; it makes a difference only in the convenience of writing the compiler, by putting the compiler data structures into a simpler form. Two compilers can produce the same output for any given input even if one uses CPS as an intermediate representation and the other uses a completely direct style, or ANF, or SSA, or what-have-you. The use of CPS as an intermediate representation moreover has no bearing on the performance of CWCC or the representation of reified continuations at run-time.
If you observed a difference in performance between two compilers of which one uses CPS and the other does not, then you observed a difference other than the intermediate representation. For example, if you start with a compiler C, and then construct a compiler C' that first CPS-converts a program and then applies compiler C to the CPS form of the program, it will probably be the case that compiler C' generates worse code. Compilers often make stronger assumptions about continuations than about other procedures, by which continuations can be made less expensive than ordinary procedures; thus if you give a compiler a program in which continuations are not distinguished from user procedures, it can't (easily) make these assumptions, and will be forced to generate worse code for continuations than it would have generated for the original direct-style program.
The two points that you observed are inherent in any implementation of a sequential programming language with nested procedure calls. Every procedure must take a continuation and every continuation must be allocated somewhere; usually this happens in a region of memory called the stack, because continuations as a data structure behave in a stack-like manner most of the time. This is also why it is a trifle silly to say that a programming language `has continuations' -- any sequential programming language has the concept; what most lack is the ability of programs to reify continuations. But this is not a reason why the compilers you tested performed differently -- every compiler, whether it uses CPS or another intermediate representation, must conceptually add a continuation parameter to each procedure and allocate storage for continuation environments for each nested procedure call.
Date: Fri, 22 May 2009 12:18:49 -0700 From: "D.McClain" dbm@asyrmatos.com
Very interesting... I would like to see your claims backed up by actual measurements.
How about an isomorphism between the set of direct-style Scheme programs and (a subset of) the set of CPS Scheme programs? Then if you give me a compiler C, I can construct a compiler C' that uses a CPS intermediate representation and such that C(P) = C'(P) for any program P. No measurement is necessary to observe that the code generated by C and C' is identical.
Put another way, turning a program into CPS doesn't add to or remove from the information in a program. So it doesn't make a compiler any more or less able to make assumptions about a program that enable it to generate better or worse code. All it changes is how convenient it is to write the compiler. It's just a way to lay out some data structures in the compiler. If someone told you that a compiler turned programs into an XML-based intermediate representation, would you believe that person if he claimed that the use of XML caused the compiler to generate bad code? (It might be indicative of incompetent software engineering on the part of the compiler's writers, but that's a different issue.)
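To sketch the direct-style-to-CPS direction of that correspondence, here is a toy one-pass CPS converter for a tiny core language (numbers, variables, one-argument lambda, application). The names are invented for this example; it is only an illustration of the idea, not code from any real compiler:

  ;; (cps expr ctx) returns code that delivers the value of EXPR to the
  ;; continuation built by CTX, a procedure mapping a value expression to
  ;; the code that consumes it.
  (define gensym-counter 0)
  (define (fresh name)
    (set! gensym-counter (+ gensym-counter 1))
    (string->symbol (string-append name (number->string gensym-counter))))

  (define (cps expr ctx)
    (cond ((or (symbol? expr) (number? expr))     ; trivial expressions
           (ctx expr))
          ((and (pair? expr) (eq? (car expr) 'lambda))
           (let ((x (car (cadr expr)))
                 (body (caddr expr))
                 (k (fresh "k")))
             ;; user lambdas gain an explicit continuation parameter
             (ctx `(lambda (,x ,k)
                     ,(cps body (lambda (v) `(,k ,v)))))))
          (else                                    ; application (f a)
           (cps (car expr)
                (lambda (f)
                  (cps (cadr expr)
                       (lambda (a)
                         (let ((r (fresh "r")))
                           `(,f ,a (lambda (,r) ,(ctx r)))))))))))

  ;; (cps '((lambda (x) x) 42) (lambda (v) `(halt ,v)))
  ;;   => ((lambda (x k1) (k1 x)) 42 (lambda (r2) (halt r2)))

Every direct-style program in this core maps to a CPS program, and the mapping loses no information, which is the point: the representation inside the compiler is orthogonal to what the compiler can know about the program.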
Okay, you have my attention... let's see that isomorphism in practice. I have seen my own measurements, and invariably they produce code that is 30% slower for CPS form than direct form. Perhaps the compilers producing the actual native code have been tuned to look for common human idioms and not CPS traits?
Dr. David McClain Sr. VP, Embedded Systems Asyrmatos Inc. Boston & Tucson phone: 520-529-2437 cell: 520-390-3995 web: www.asyrmatos.com e-mail: dbm@asyrmatos.com
On May 22, 2009, at 12:38, Taylor R Campbell wrote:
Date: Fri, 22 May 2009 12:18:49 -0700 From: "D.McClain" dbm@asyrmatos.com
Very interesting... I would like to see your claims backed up by actual measurements.
How about an isomorphism between the set of direct-style Scheme programs and (a subset of) the set of CPS Scheme programs? Then if you give me a compiler C, I can construct a compiler C' that uses a CPS intermediate representation and such that C(P) = C'(P) for any program P. No measurement is necessary to observe that the code generated by C and C' is identical.
Put another way, turning a program into CPS doesn't add to or remove from the information in a program. So it doesn't make a compiler any more or less able to make assumptions about a program that enable it to generate better or worse code. All it changes is how convenient it is to write the compiler. It's just a way to lay out some data structures in the compiler. If someone told you that a compiler turned programs into an XML-based intermediate representation, would you believe that person if he claimed that the use of XML caused the compiler to generate bad code? (It might be indicative of incompetent software engineering on the part of the compiler's writers, but that's a different issue.)
Date: Fri, 22 May 2009 12:45:59 -0700 From: "D.McClain" dbm@asyrmatos.com
Okay, you have my attention... let's see that isomorphism in practice.
I leave that as an exercise for the reader.
I have seen my own measurements, and invariably they produce code that is 30% slower for CPS form than direct form. Perhaps the compilers producing the actual native code have been tuned to look for common human idioms and not CPS traits?
The way you say that suggests to me that you are using the *same* compiler to compare a direct-style program with the same program converted to continuation-passing style. Unless the compiler is extremely clever, it will probably generate worse code for the CPS form of the program, for the reason I explained in my first message.
That's a very different question, however, from the question of how the use of a CPS intermediate representation affects the code that a compiler generates.
Bingo!
Dr. David McClain Sr. VP, Embedded Systems Asyrmatos Inc. Boston & Tucson phone: 520-529-2437 cell: 520-390-3995 web: www.asyrmatos.com e-mail: dbm@asyrmatos.com
On May 22, 2009, at 12:51, Taylor R Campbell wrote:
Date: Fri, 22 May 2009 12:45:59 -0700 From: "D.McClain" dbm@asyrmatos.com
Okay, you have my attention... let's see that isomorphism in practice.
I leave that as an exercise for the reader.
I have seen my own measurements, and invariably they produce code that is 30% slower for CPS form than direct form. Perhaps the compilers producing the actual native code have been tuned to look for common human idioms and not CPS traits?
The way you say that suggests to me that you are using the *same* compiler to compare a direct-style program with the same program converted to continuation-passing style. Unless the compiler is extremely clever, it will probably generate worse code for the CPS form of the program, for the reason I explained in my first message.
That's a very different question, however, from the question of how the use of a CPS intermediate representation affects the code that a compiler generates.
...thinking back on earlier experiments, I have seen just about the same degree of slowdown when converting from applicative style to pure functional lazy style (a la Haskell). So that leads me to conclude further that it is the production and subsequent handling of closures that costs the speed.
Dr. David McClain Sr. VP, Embedded Systems Asyrmatos Inc. Boston & Tucson phone: 520-529-2437 cell: 520-390-3995 web: www.asyrmatos.com e-mail: dbm@asyrmatos.com
On May 22, 2009, at 13:04, D.McClain wrote:
Bingo!
Dr. David McClain Sr. VP, Embedded Systems Asyrmatos Inc. Boston & Tucson phone: 520-529-2437 cell: 520-390-3995 web: www.asyrmatos.com e-mail: dbm@asyrmatos.com
On May 22, 2009, at 12:51, Taylor R Campbell wrote:
Date: Fri, 22 May 2009 12:45:59 -0700 From: "D.McClain" dbm@asyrmatos.com
Okay, you have my attention... let's see that isomorphism in practice.
I leave that as an exercise for the reader.
I have seen my own measurements, and invariably they produce code that is 30% slower for CPS form than direct form. Perhaps the compilers producing the actual native code have been tuned to look for common human idioms and not CPS traits?
The way you say that suggests to me that you are using the *same* compiler to compare a direct-style program with the same program converted to continuation-passing style. Unless the compiler is extremely clever, it will probably generate worse code for the CPS form of the program, for the reason I explained in my first message.
That's a very different question, however, from the question of how the use of a CPS intermediate representation affects the code that a compiler generates.
Functions of fixed arity (unary or binary) seem unlikely to me to be very expensive. Given that you know those functions, I believe you can get some pretty good optimisations (inlining, special calling conventions, etc.).
Also, the fact that the compiler handles continuations very well and the fact that it does not transform code to CPS at some point are not that closely related. Maybe the compiler itself is written in CPS (look for Marc's "Using Closures for Code Generation"). (This method works great for interpreters; I don't actually know how it works out for compilers…)
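For reference, here is a minimal sketch of that closure-based technique for a tiny expression language (constants and addition only); it shows only the general idea and is not taken from the paper:

  ;; "Closures for code generation", minimally: compile the expression tree
  ;; into a nest of closures once, then evaluate by calling closures,
  ;; without re-examining the tree.
  (define (compile-expr e)
    (cond ((number? e)
           (lambda () e))                       ; constant: just return it
          ((and (pair? e) (eq? (car e) '+))
           (let ((left  (compile-expr (cadr e)))
                 (right (compile-expr (caddr e))))
             (lambda () (+ (left) (right)))))   ; dispatch decided at compile time
          (else (error "unknown expression" e))))

  ;; ((compile-expr '(+ 1 (+ 2 3))))  =>  6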
As for Haskell (or any language), the problem is probably you. You assume some equivalences exist and that efficiency should therefore map across them. Pure functional lazy-style code is bound to be slower unless you have deep knowledge of the semantics of the language, of the compiler, and of its optimisations. Haskell easily makes code aesthetic, but only *good* programmers make Haskell code efficient (even though the GHC compiler does really good optimisations).
My 2¢.
P!
2009/5/23 D.McClain dbm@asyrmatos.com:
...thinking back on earlier experiments, I have seen just about the same degree of slowdown when converting from applicative style to pure functional lazy style (a la Haskell). So that leads me to conclude further that it is the production and subsequent handling of closures that costs the speed.
I just read an interesting paper entitled "Towards a Portable and Mobile Scheme Interpreter", describing the interpreter called Mobit, which can serialize closures, continuations, and new functions across a channel to cooperating Termite sessions. Is this work still ongoing / available?
Dr. David McClain Sr. VP, Embedded Systems Asyrmatos Inc. Boston & Tucson phone: 520-529-2437 cell: 520-390-3995 web: www.asyrmatos.com e-mail: dbm@asyrmatos.com
On May 23, 2009, at 02:06, Adrien Piérard wrote:
Functions of fixed arity (unary or binary) seem unlikely to me to be very expensive. Given that you know those functions, I believe you can get some pretty good optimisations (inlining, special calling conventions, etc.).
Also, the fact that the compiler handles continuations very well and the fact that it does not transform code to CPS at some point are not that closely related. Maybe the compiler itself is written in CPS (look for Marc's "Using Closures for Code Generation"). (This method works great for interpreters; I don't actually know how it works out for compilers…)
As for Haskell (or any language), the problem is probably you. You assume some equivalences exist and that efficiency should therefore map across them. Pure functional lazy-style code is bound to be slower unless you have deep knowledge of the semantics of the language, of the compiler, and of its optimisations. Haskell easily makes code aesthetic, but only *good* programmers make Haskell code efficient (even though the GHC compiler does really good optimisations).
My 2¢.
P!
2009/5/23 D.McClain dbm@asyrmatos.com:
...thinking back on earlier experiments, I have seen just about the same degree of slowdown when converting from applicative style to pure functional lazy style (a la Haskell). So that leads me to conclude further that it is the production and subsequent handling of closures that costs the speed.
-- Français, English, 日本語, 한국어
I shall definitely put it online… I'll try to make that happen within a week.
P!
2009/5/23 D.McClain dbm@asyrmatos.com:
I just read an interesting paper entitled "Towards a Portable and Mobile Scheme Interpreter", describing the interpreter called Mobit, which can serialize closures, continuations, and new functions across a channel to cooperating Termite sessions. Is this work still ongoing / available?
On 22-May-09, at 9:51 AM, Alex Queiroz wrote:
Hallo,
On 5/22/09, Marc Feeley feeley@iro.umontreal.ca wrote:
On 21-May-09, at 4:54 PM, James Long wrote:
`gsc' supports -expansion, but it seems to show the basic expansion into normalized Gambit code. Is there any way to show the fully CPS-transformed forms of the code?
Nope. Gambit does not transform code to CPS style.
Also more of a curiosity, why? Is CPS irremediably slower? I've
been reading a lot about compiling lately...
In the context of a full implementation of Scheme (with first class continuations), the use of CPS as an intermediate representation:
1) Simplifies writing a simple non-optimizing compiler (see the 90 minute Scheme compiler). Why? Because first-class continuations come "for free" (a small sketch appears at the end of this message).
2) Makes it more difficult (but not impossible) to write an optimizing compiler. Why? Because an advanced static analysis is needed to determine which closures can be managed in a "stack like" manner.
In a multithreaded Scheme like Gambit, which implements threads using first class continuations, this static analysis would end up determining that none of the continuation closures can be managed in a "stack like" manner (they would be allocated on the heap, which would put more pressure on the garbage collector, and likely decrease overall performance).
But in practice, very frequently continuation frames are not captured, and could be managed on a stack. Gambit uses this fact to implement a dynamic conservative frame lifetime analysis. Basically, when a continuation is captured due to a call to call/cc or a thread context switch, the current content of the stack is logically transferred to the heap (this requires a few simple pointer updates, and no copying).
It would still be possible to do this with a CPS intermediate representation as long as the closures corresponding to continuations would be marked specially.
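As an aside, point 1 can be made concrete with a small sketch (assuming the whole program has already been CPS-converted; this is not how Gambit actually implements it): once every procedure receives its continuation explicitly, call/cc needs no special machinery at all.

  ;; In a fully CPS-converted program every procedure takes its continuation
  ;; k as an ordinary argument, so call/cc is just another procedure.
  (define (call/cc-cps f k)
    ;; The reified continuation ignores the continuation in effect at its
    ;; call site and resumes k, the continuation of the call/cc call itself.
    (f (lambda (result ignored-k) (k result))
       k))

  ;; ((lambda (k) (call/cc-cps (lambda (return k1) (return 42 k1)) k))
  ;;  (lambda (v) v))   =>  42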
Marc
Hallo,
On 5/23/09, Marc Feeley feeley@iro.umontreal.ca wrote:
In the context of a full implementation of Scheme (with first class continuations), the use of CPS as an intermediate representation:
- Simplifies writing a simple non-optimizing compiler (see the 90 minute Scheme compiler). Why? Because first-class continuations come "for free".
- Makes it more difficult (but not impossible) to write an optimizing compiler. Why? Because an advanced static analysis is needed to determine which closures can be managed in a "stack like" manner.
Thanks for the thorough explanation, Marc. Would you say that, generally speaking, the harder-to-implement advanced static analysis would pay off in the end? Well, this is kind of off-topic, but I'm playing with a toy Scheme compiler based on LiSP and Dybvig's PhD, so I am curious. If this is unacceptable, let me know.
Cheers,