A few weeks ago, I was telling you about my idea of an interprocedural type analysis which wouldn't analyze functions that don't actually get executed. Yesterday and today, I was doing some more brainstorming about these ideas, and I think I found some possible flaws:
1. Until now, I have been assuming that we can incrementally, dynamically build a complete call graph for the program, with edges between all methods that have called each other so far. I think this is feasible in practice, but in theory, you can imagine a scenario in which a program contains a cluster of 10000 methods that each call one another... that's 10000² = 100 million call edges. This would both take quite a bit of memory to store and tremendously slow down the analysis (see the first sketch after this list).
2. I would like to incorporate function versioning into the system based on argument types. This would result in call graphs which can contain multiple instances of a function, with many different argument type strings. The problem here is that we need to initially record all possible argument type strings to know all possible versions of a function. Once again, there could be a bajillion versions in the call graph. There is also the problem that building/recording argument type strings on *every* function call before the program gets optimized could slow down the code by a tremendous factor (see the second sketch after this list).
3. We have no idea how fast the analysis will be. So far, I've been assuming it could be done in ~10 seconds, but that is a guess, and perhaps an optimistic one. Some JavaScript programs out there contain thousands of functions, and if the analysis were to take several minutes, it would make this approach much less interesting, even if we can serialize the optimized code, because it's quite possible many web applications won't even run that long.
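To make the first concern concrete, here's a rough TypeScript sketch (all names made up, not an actual implementation) of the kind of incrementally-built call graph I have in mind. Each newly observed caller/callee pair adds an entry, so a cluster of N mutually-calling methods costs on the order of N² entries:

```typescript
// Hypothetical sketch: a call graph built incrementally as calls are observed.
class CallGraph {
  // For each caller, the set of callees it has been seen calling so far.
  private edges: Map<string, Set<string>> = new Map();
  private edgeCount = 0;

  // Record that `caller` called `callee` (e.g. from a call-site hook).
  addCall(caller: string, callee: string): void {
    let callees = this.edges.get(caller);
    if (!callees) {
      callees = new Set();
      this.edges.set(caller, callees);
    }
    if (!callees.has(callee)) {
      callees.add(callee);
      this.edgeCount++;
    }
  }

  get size(): number {
    return this.edgeCount;
  }
}

// Worst case: a cluster of N methods that all call one another.
// N = 10000 gives N*N = 100 million edges, and both memory use and
// analysis time grow with the edge count.
```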
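And here's roughly what I mean by argument type strings (again a sketch with hypothetical names; the type tags and the string format are just illustrative). The point is that this hook would run on every unoptimized call, which is where the overhead worry in point 2 comes from:

```typescript
// Hypothetical sketch of per-call argument type string recording.
type TypeTag = "int" | "float" | "string" | "object" | "undefined" | "other";

function typeTagOf(v: unknown): TypeTag {
  if (typeof v === "number") return Number.isInteger(v) ? "int" : "float";
  if (typeof v === "string") return "string";
  if (typeof v === "object" && v !== null) return "object";
  if (v === undefined) return "undefined";
  return "other";
}

// Observed argument type strings per function, i.e. candidate versions.
const observedVersions: Map<string, Set<string>> = new Map();

function recordCall(funcName: string, args: unknown[]): void {
  // Building this string on *every* call is exactly the overhead I worry about.
  const typeString = args.map(typeTagOf).join(","); // e.g. "int,object,string"
  let strings = observedVersions.get(funcName);
  if (!strings) {
    observedVersions.set(funcName, (strings = new Set()));
  }
  strings.add(typeString);
}
```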
So I've given it some more thought, and I think that possibly, we can settle for only analyzing the K "hottest" functions in the program, and use stochastic profiling to record argument types, return types, global variable types and object field types. This would allow the profiling code to run faster, because it doesn't need to record everything it does, but only, say, 1-5% of stores to globals/fields. It would also make the analysis faster, because we can put an upper bound on how many functions we analyze and optimize, and on how many versions of a function we're willing to instantiate.
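Here's a minimal sketch of what I mean, assuming a made-up sampling rate and placeholder names for the profiling hooks:

```typescript
// Hypothetical sketch: sample only a small fraction of global/field stores.
const SAMPLE_RATE = 0.02; // record ~2% of stores (somewhere in the 1-5% range)

// Observed type tags per global variable / object field, e.g. "g.x" -> {"number"}.
const observedTypes: Map<string, Set<string>> = new Map();

function profileStore(location: string, value: unknown): void {
  // Skip most stores entirely, so the profiling hook stays cheap.
  if (Math.random() >= SAMPLE_RATE) return;

  const tag = typeof value; // a real system would use richer type tags
  let tags = observedTypes.get(location);
  if (!tags) {
    observedTypes.set(location, (tags = new Set()));
  }
  tags.add(tag);
}

// Call counters used to pick the K "hottest" functions to analyze/optimize.
const callCounts: Map<string, number> = new Map();

function countCall(funcName: string): void {
  callCounts.set(funcName, (callCounts.get(funcName) ?? 0) + 1);
}

function hottestK(k: number): string[] {
  return [...callCounts.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, k)
    .map(([name]) => name);
}
```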
One "hiccup", however, is that since we're doing optimization based on profiling information we gathered stochastically, and we didn't analyze the whole program, we need to verify that global variables and object field types all have the types we expect before patching in the optimized functions that make assumptions based on this data. This would imply some kind of heap traversal to look at the fields of all live objects (mark/sweep without the sweep). It would also imply we need to make sure that the unoptimized functions check that the types they return and assign to globals/fields match what we assume. This can possibly be accomplished with code patching.
As for the heap traversal, we can possibly enable safety checks in unoptimized functions first, and then do the heap scan concurrently with the mutator process, to avoid a pause. The safety checks in unoptimized code might seem like they would be very slow, but we're assuming we've already optimized the functions that make up most of the execution time. We can also keep optimizing less hot functions in parallel with the program, making this less and less of an issue as the program keeps running and more of it gets optimized.
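Putting the pieces together, the sequence I'm imagining looks something like the sketch below. All the function names are placeholders for the mechanisms described above, not real APIs:

```typescript
// Hypothetical sketch of the overall patch-in protocol described above.
async function tryPatchInOptimizedCode(): Promise<void> {
  // 1. Patch unoptimized functions so they check the types they return
  //    and assign to globals/fields against our assumptions.
  enableSafetyChecks();

  // 2. Scan the heap concurrently with the running program; the checks
  //    enabled in step 1 catch any violating stores made during the scan.
  const assumptionsHold = await verifyHeapConcurrently();

  if (assumptionsHold) {
    // 3. Safe to swap in the specialized, optimized functions.
    patchInOptimizedFunctions();
  } else {
    // Some assumption was already violated: keep running the generic code
    // and fall back to re-profiling / re-analysis.
    discardOptimizedFunctions();
  }
}

// Stubs standing in for the mechanisms discussed above.
function enableSafetyChecks(): void { /* patch type checks into unoptimized code */ }
async function verifyHeapConcurrently(): Promise<boolean> { return true; /* heap scan */ }
function patchInOptimizedFunctions(): void { /* swap in specialized versions */ }
function discardOptimizedFunctions(): void { /* throw away invalidated versions */ }
```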
What do you guys think?
- Maxime