The history of T article (http://www.paulgraham.com/thist.html) has some interesting things to say about intermediate representations, and SSA vs CPS in particular. Olin Shivers thinks that CPS representation is better than ANF (A-normal form). He also says that SSA is a rediscovery of CPS representation, under a different notation.
I highly recommend you all read it.
Marc
The history of T article (http://www.paulgraham.com/thist.html) has some interesting things to say about intermediate representations, and SSA vs CPS in particular. Olin Shivers thinks that CPS representation is better than ANF (A-normal form). He also says that SSA is a rediscovery of CPS representation, under a different notation.
Are you suggesting we use CPS for Tachyon?
I don't think the fact that the SSA formalism was discovered/invented later than CPS disqualifies CPS in any way. It has been said that CPS and SSA are (more or less) equivalent. However, I still think CPS is less practical than SSA for imperative languages, and that JavaScript is more like Java, Python or Ruby than it is like Scheme. SSA has the benefit of being more human-readable than CPS too ;)
I highly recommend you all read it.
I read it and found the following quote interesting:
"Richard Kelsey took his front end, which was a very aggressive CPS-based optimiser, and extended it all the way down to the ground to produce a complete, second compiler, which he called "TC" for the "Transformational Compiler." His approach was simply to keep transforming the program from one simple, CPS, lambda language to an even simpler one, until the language was so simple it only had 16 variables... r1 through r15, at which time you could just kill the lambdas and call it assembler"
This is similar to what I was suggesting for our IR. Have the front-end produce a CFG with SSA, then analyze, transform and optimize it all the way until we have assembler inside a CFG. I would ideally like for our IR/analysis/optimizations to fit within a model that is as unified as possible. This will make the compiler simpler to implement and extend.
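For illustration only, here is a rough sketch (in JavaScript, with made-up names that are not a commitment to any actual Tachyon design) of what such a unified representation could look like:

    // One instruction interface for every level of the IR, from high-level
    // SSA operations down to machine-level ones, all living in the basic
    // blocks of a CFG.
    function Instr(op, operands) {
        this.op = op;               // e.g. "add", "call", or later "x86.add"
        this.operands = operands;   // SSA values: other Instr objects or constants
    }

    function BasicBlock(label) {
        this.label = label;
        this.instrs = [];           // ordered list of Instr
        this.succs = [];            // successor blocks, i.e. the CFG edges
    }

    // Lowering would then be a series of passes that rewrite Instr nodes in
    // place (generic "add" -> machine add, virtual registers -> r1..r15, ...),
    // so the same analyses and printing code apply at every stage.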
- Maxime
On 2010-05-17, at 4:44 PM, Maxime Chevalier-Boisvert wrote:
The history of T article (http://www.paulgraham.com/thist.html) has some interesting things to say about intermediate representations, and SSA vs CPS in particular. Olin Shivers thinks that CPS representation is better than ANF (A-normal form). He also says that SSA is a rediscovery of CPS representation, under a different notation.
Are you suggesting we use CPS for Tachyon?
No.
I don't think the fact that the SSA formalism was discovered/invented later than CPS disqualifies CPS in any way. It has been said that CPS and SSA are (more or less) equivalent. However, I still think CPS is less practical than SSA for imperative languages, and that JavaScript is more like Java, Python or Ruby than it is like Scheme. SSA has the benefit of being more human-readable than CPS too ;)
CPS is just as easy as ANF, as long as you don't indent every continuation. I.e.:
(add x y
     (lambda (r1)
       (sub r1 z
            (lambda (r2)
              ...

=>

(add x y (lambda (r1)
(sub r1 z (lambda (r2)
...

which is just

r1 <= (add x y)
r2 <= (sub r1 z)
...
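To make the "SSA is a rediscovery of CPS" point concrete, the place where the two notations visibly line up is a join point: a continuation parameter plays the same role as a phi node. Roughly, in the same informal notation as above (just an illustration, not a proposed syntax):

    (if t (k 1) (k 2))          ; both branches call the continuation k
    k = (lambda (r) ... use r ...)

corresponds to

    then: jump join
    else: jump join
    join: r <= phi(1 from then, 2 from else)
          ... use r ...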
I highly recommend you all read it.
I read it and found the following quote interesting:
"Richard Kelsey took his front end, which was a very aggressive CPS-based optimiser, and extended it all the way down to the ground to produce a complete, second compiler, which he called "TC" for the "Transformational Compiler." His approach was simply to keep transforming the program from one simple, CPS, lambda language to an even simpler one, until the language was so simple it only had 16 variables... r1 through r15, at which time you could just kill the lambdas and call it assembler"
Yes, I've used this technique in one of my compiler classes. The AST is transformed in several passes, and in the end, there is a direct mapping between the variables in the AST and machine registers.
This is similar to what I was suggesting for our IR. Have the front-end produce a CFG with SSA, then analyze, transform and optimize it all the way until we have assembler inside a CFG. I would ideally like for our IR/analysis/optimizations to fit within a model that is as unified as possible. This will make the compiler simpler to implement and extend.
Marc
A few weeks ago, I was telling you about my idea of an interprocedural type analysis which wouldn't analyze functions that don't actually get executed. Yesterday and today, I was doing some more brainstorming about these ideas, and I think I found some possible flaws:
1. Until now, I have been assuming that we can incrementally, dynamically build a complete call graph for the program, with edges between all methods that have called each other so far. I think this is feasible in practice, but in theory, you can imagine a scenario in which a program contains a cluster of 10000 methods that each call one another... This results in 100 million call edges (10000 callers × 10000 callees). This would both take quite a bit of memory to store, and tremendously slow down the analysis.
2. I would like to incorporate function versioning into the system based on argument types. This would result in call graphs which can contain multiple instances of a function, with many different argument type strings. The problem here is that we need to initially record all possible argument type strings to know all possible versions of a function. Once again, there could be a bajillion versions in the call graph (see the sketch after this list). There is also the problem that building/recording argument type strings on *every* function call before the program gets optimized could slow down the code by a tremendous factor.
3. We have no idea how fast the analysis will be. So far, I've been assuming it could be done in ~10 seconds, but that is a guess, and perhaps an optimistic one. Some JavaScript programs out there contain thousands of functions, and if the analysis were to take several minutes, it would make this approach much less interesting, even if we can serialize the optimized code, because it's quite possible many web applications won't even run that long.
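Regarding point 2, a minimal sketch of what capping the number of versions could look like (MAX_VERSIONS, compileSpecialized and genericVersion are hypothetical names, not an actual design):

    var MAX_VERSIONS = 5;   // arbitrary cap, assumed value

    function getVersion(fn, argTypes) {
        // key the version cache on the argument type string, e.g. "int,int"
        var key = argTypes.join(',');
        if (fn.versions === undefined)
            fn.versions = {};
        if (fn.versions.hasOwnProperty(key))
            return fn.versions[key];
        if (Object.keys(fn.versions).length >= MAX_VERSIONS)
            return fn.genericVersion;   // hypothetical unspecialized fallback
        fn.versions[key] = compileSpecialized(fn, argTypes);   // hypothetical compiler entry point
        return fn.versions[key];
    }

A cap like this bounds both the memory taken by the versioned call graph and the number of versions the analysis has to visit.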
So I've given it some more thought, and I think we can possibly settle for only analyzing the K "hottest" functions in the program, and use stochastic profiling to record argument types, return types, global variable types and object field types. This would allow the profiling code to run faster, because it doesn't need to record everything it does, but only, say, 1-5% of stores to globals/fields. It would also make the analysis faster, because we can put an upper bound on how many functions we analyze and optimize, and on how many versions of a function we're willing to instantiate.
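A minimal sketch of the sampling idea (SAMPLE_RATE and recordFieldType are assumptions, not a real API):

    var SAMPLE_RATE = 0.02;   // record roughly 2% of stores (assumed value)

    function profiledStore(obj, field, value) {
        // only a small random fraction of stores pays the profiling cost
        if (Math.random() < SAMPLE_RATE)
            recordFieldType(obj, field, typeof value);   // hypothetical profile table update
        obj[field] = value;
    }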
One "hiccup", however, is that since we're doing optimization based on profiling information we gathered stochastically, and we didn't analyze the whole program, we need to verify that global variables and object field types all have the types we expect before patching in the optimized functions that make assumptions based on this data. This would imply some kind of heap traversal to look at the fields of all live objects (mark/sweep without the sweep). It would also imply we need to make sure that the unoptimized functions check that the types they return and assign to globals/fields match what we assume. This can possibly be accomplished with code patching.
As for the heap traversal, we can possibly enable safety checks in unoptimized functions first, and then begin doing the heap scan concurrently with the mutator process, to avoid a pause. The safety checks in unoptimized code might seem like they will be very slow, but we're assuming we've already optimized the functions that make up most of the execution time. We can also keep optimizing less hot functions in parallel with the program, making this less and less of an issue as the program keeps running and more and more of it gets optimized.
What do you guys think?
- Maxime