There is a very revealing paper by Clinger, Hartheimer, and Ost, "Implementation Strategies for First-Class Continuations", that provides a detailed examination of various strategies, the code generated to support them, and actual measurements showing the effects of each implementation variation.
http://www.springerlink.com/content/h5808n962434j275/fulltext.pdf
This paper, along with Marc's recent explanation about the cost of CPS, should go a long way toward explaining why CPS code may be slower than direct form code.
Dr. David McClain
Sr. VP, Embedded Systems
Asyrmatos Inc.
Boston & Tucson
phone: 520-529-2437
cell: 520-390-3995
web: www.asyrmatos.com
e-mail: dbm@asyrmatos.com
2009/5/23 D.McClain dbm@asyrmatos.com:
should go a long way toward explaining why CPS code may be slower than direct form code.
I have not yet read the paper, but it appears that there is a terminological disconnect happening here.
In Scheme, I program using CPS precisely because it is in fact *faster*. Specifically, I have shown it to be faster when compiling with Stalin. However, this performance increase is not a direct result of the CPS idiom, but of the fact that monomorphic data extents are clearly defined - allowing Stalin to perform more aggressive optimizations.
Note that I am *not* using CPS to implement *continuations*. But I am using the CPS idiom in cases where a caller knows quite a bit more about type-appropriate behavior than the (ultimate) callee. Such conditions are often used as examples of when call/cc is helpful (e.g. in implementing exceptions).
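To make that concrete, here is a minimal sketch of the idiom (the port-validation scenario and every name in it are invented purely for illustration):

(define (parse-port-number str k-fail k-success)
  ; the callee never decides what failure "means"; the caller does
  (let ((n (string->number str)))
    (if (and n (exact? n) (<= 1 n 65535))
        (k-success n)
        (k-fail str))))

; one call site treats failure as a default value, another as a hard
; error, and neither one needs a reified continuation via call/cc
(define (port-or-default str)
  (parse-port-number str (lambda (s) 8080) (lambda (n) n)))

(define (port-or-error str)
  (parse-port-number str (lambda (s) (error "bad port:" s)) (lambda (n) n)))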
The point being that there is a world of difference between programming in a CPS idiom and reifying continuations with call/cc. Saying CPS code is slower is simply silly and makes you look like a troll.
david rush
Yes, I notice you guys are quick to assume that I must be a troll and try to shoot me down. I was getting the impression that I was addressing an audience of 15-year-olds.
FYI, I am an astrophysicist by training, but I have been doing computers in anger for more than 40 years. My primary interests are in applied and embedded computing, not comp sci per se, although I do also have some graduate work in CompSci.
Hence, I am most impressed by actual implementations and measurements. Concrete examples of theoretical topics are most welcome. But sweeping theoretical statements are really not very impressive.
Cheers,
Dr. David McClain
Sr. VP, Embedded Systems
Asyrmatos Inc.
Boston & Tucson
phone: 520-529-2437
cell: 520-390-3995
web: www.asyrmatos.com
e-mail: dbm@asyrmatos.com
Okay, when I write articles for you guys, Paul Hudak insists that I avoid or explain terms like Zernike Polynomials, Interferometry, etc. Let's try to reach some neutral communicative ground here...
Delimiting "monomorphic data extents" speeding up compiled code sounds interesting. Could you please explain what this really means?
Dr. David McClain
Sr. VP, Embedded Systems
Asyrmatos Inc.
Boston & Tucson
phone: 520-529-2437
cell: 520-390-3995
web: www.asyrmatos.com
e-mail: dbm@asyrmatos.com
2009/5/24 D.McClain dbm@asyrmatos.com:
Delimiting "monomorphic data extents" speeding up compiled code sounds interesting. Could you please explain what this really means?
consider:
(define (assq-k tag a-list k-fail k-success)
  ; ML type: symbol -> (symbol . 'a) list -> (symbol -> 'b) -> ((symbol . 'a) -> 'c) -> 'd
  (cond ((null? a-list)          (k-fail tag))
        ((eq? tag (caar a-list)) (k-success (car a-list)))
        (else (assq-k tag (cdr a-list) k-fail k-success))))
(define (assq tag a-list)
  ; ML type: symbol -> (symbol . 'a) list -> (boolean | (symbol . 'a))
  (assq-k tag a-list (lambda (x) #f) (lambda (x) x)))
In ASSQ-K all of the types are well-determined: there is no additional type checking introduced by its application, because the scope of all introduced type variance is limited to the continuation functions K-FAIL and K-SUCCESS. This is not the case for ASSQ, which introduces a union type that must subsequently be discriminated either by user code or by the run-time system (in the case of an error).
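To see the difference at a call site, here is a small illustration (the 'port lookup is made up for the example):

; with ASSQ-K the failure and success paths never meet, so no union
; result is ever constructed or re-discriminated
(define (lookup-port config)
  (assq-k 'port config
          (lambda (tag)  8080)           ; failure continuation: fall back to a default
          (lambda (pair) (cdr pair))))   ; success continuation: works on a known pair

; with ASSQ the caller receives (boolean | pair) and must test it again
(define (lookup-port* config)
  (let ((r (assq 'port config)))
    (if r (cdr r) 8080)))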
Admittedly this is a trivial example, but the principle does scale up to more complicated APIs. I use it heavily in network and DB programming. It also turns out to be a lovely way to design in error-handling at the very beginning of a project - something that *always* turns out to be a pain to retrofit. Using explicit CPS also turns out to be a lovely way to replace CALL-WITH-VALUES because you avoid the extra machinery implicit in the schizophrenic Scheme semantics for that function.
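As a rough sketch of that last point, using nothing beyond standard Scheme:

; multiple results through the VALUES / CALL-WITH-VALUES machinery
(define (div-mod n d)
  (values (quotient n d) (remainder n d)))

(call-with-values (lambda () (div-mod 17 5))
                  (lambda (q r) (list q r)))   ; => (3 2)

; the same thing in explicit CPS: the "consumer" is an ordinary argument
; and no special multiple-values protocol is involved
(define (div-mod-k n d k)
  (k (quotient n d) (remainder n d)))

(div-mod-k 17 5 (lambda (q r) (list q r)))     ; => (3 2)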
Strictly speaking though, none of this has anything to do with whether a compiler uses a CPS-based internal representation. All of the above has to do with realising and taking advantage of the expressive capabilities of a language with full tail call optimization. Very few people seem to get this, even in the hard-core Scheme world.
david rush
Thank you for sharing this article. I don't think you are a troll, and, unfortunately, my mentality is that of a 15-year-old.