On 2011-05-06, at 1:29 PM, chevalma@iro.umontreal.ca wrote:
"That one optimization" is slowing down our code enormously. I would estimate that implementing this optimization will make calling global functions 10 to 20 times faster. The only instruction that needs special support is the call instruction (and the putprop/getprop primitives).
I think we all agree about the performance issue. My point is simply that we can possibly do better than providing special support for just call and those primitives. We could eventually inline functions directly, as V8 does, for example, and use code patching to replace the inlined code with something else.
This seems to contradict what you said above. I suggested that we make the call instruction patchable and you said you don't like the idea. Can you explain your point better?
I'd like to have a patching mechanism that is more generic: applicable to more than one instruction and to a variety of optimizations.
In designing the code patching mechanism, you can't avoid starting with no patching features and growing that into a complete set. In other words, it is an incremental design process: a code patching optimization opportunity is identified, a way to achieve it is designed, and this is iterated.
Generalizing the code patching mechanism is interesting. However, premature generalization is not a good idea: you can end up with an overly abstract mechanism that is hard to implement, expensive to execute, and still unable to handle some code patching situations (there is no such thing as a fully general mechanism). It is better to address our current needs first and adapt as they grow, generalizing only once we have a more complete understanding of our needs and some experience doing code patching.
I am not suggesting that we limit the code patching mechanism to calls to global functions. It is merely a starting point, and something that we clearly must address. Other code patching cases will surely appear as we continue adding optimizations, possibly in the near future.
This is easy enough. By the way, I wouldn't call it a conditional branch... it is a changeable unconditional branch. What use do you have for this?
Perhaps we could call it a toggle. It could be used for a variety of things. In this case, it would let us inline a global function at a call site and, if someone later redefines the global function, patch the branch to do a regular function call instead. Other possible uses are enabling or disabling profiling code, inline caching, or type optimizations, and triggering on-stack replacement when a given point in the code is reached.
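To make the idea concrete, here is a toy model of the toggle in plain JavaScript (hypothetical names, not Tachyon code): the branch direction is chosen at patch time rather than tested at run time, and the two directions here are an inlined fast path and a generic call through the global object.

// Hypothetical model of a 'toggle': a branch whose direction is
// fixed by patching rather than by a run-time test.

function makeToggle(inlinedPath, fallbackPath) {
    var current = inlinedPath;          // default direction
    return {
        run: function () {              // the "branch": jump to current target
            return current.apply(null, arguments);
        },
        flip: function (useInlined) {   // patching the branch direction
            current = useInlined ? inlinedPath : fallbackPath;
        }
    };
}

// Example: inline the body of a global 'sqr' at a call site, and
// fall back to a real call if 'sqr' is ever redefined.
var globals = { sqr: function (x) { return x * x; } };
var site = makeToggle(
    function (x) { return x * x; },          // inlined copy of sqr
    function (x) { return globals.sqr(x); }  // generic call through the global
);

site.run(5);          // 25, via the inlined copy
globals.sqr = function (x) { return x + x; };
site.flip(false);     // redefinition: patch the branch to the fallback
site.run(5);          // 10, via a real call to the redefined global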
Note that the branch will have a run-time cost, wherever the branch points to. So when generating the IR we have to consciously introduce a changeable branch (and be willing to pay the run-time cost), in the hope that it will save time later on. So... what kind of optimization do you want to do with this?
We could order the blocks so that the more likely "default" block follows immediately after the toggle point, with no gap in the instructions. Toggling would write a jump directly over the start of that following block, and toggling back would restore the original instructions over the jump.
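Concretely, the layout could look something like this toy model, where an array of instruction slots stands in for the generated machine code (hypothetical, not the real code generator):

// The "default" block starts right after the toggle point, with no gap.
// Enabling the toggle overwrites its first slot with a jump; disabling
// restores the saved slot.

var code = [
    "toggle_point",        // index 0: where the patch is applied
    "default_insn_0",      // default block follows immediately
    "default_insn_1",
    "other_block_insn_0"   // the other direction of the toggle
];

var TOGGLE_INDEX = 1;      // first slot of the default block
var OTHER_BLOCK_INDEX = 3;
var savedSlot = null;

function toggleOn() {      // branch away from the default block
    savedSlot = code[TOGGLE_INDEX];
    code[TOGGLE_INDEX] = "jump " + OTHER_BLOCK_INDEX;
}

function toggleOff() {     // restore the original instruction
    code[TOGGLE_INDEX] = savedSlot;
    savedSlot = null;
}

toggleOn();   // code[1] is now "jump 3"; the default block is skipped
toggleOff();  // code[1] is "default_insn_0" again; default path restored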
All of this is speculative and seems hard to do. It is the kind of thing that is interesting to research but takes time.
Right now (working toward milestone 2 of the project) our objective is to improve the performance of Tachyon so that it is competitive with other JS VMs. Let's not neglect the low-hanging fruit that will bring us closer to that objective. Research on a general code patching mechanism can be done after milestone 2.
Marc