At the last meeting I mentioned that I was worried about the performance of the getprop operation when calling global functions (such as in fib). Probably the biggest performance "mountain" at this point. So I wrote a simple benchmark to test how Tachyon performs when calling global functions:
function f0() { }
function f1() { f0(); f0(); f0(); f0(); }
function f2() { f1(); f1(); f1(); f1(); }
function f3() { f2(); f2(); f2(); f2(); }
function f4() { f3(); f3(); f3(); f3(); }
function f5() { f4(); f4(); f4(); f4(); }
function f6() { f5(); f5(); f5(); f5(); }
function f7() { f6(); f6(); f6(); f6(); }
function f8() { f7(); f7(); f7(); f7(); }
function f9() { f8(); f8(); f8(); f8(); }
function f10() { f9(); f9(); f9(); f9(); }
function f11() { f10(); f10(); f10(); f10(); }
function f12() { f11(); f11(); f11(); f11(); }
function f13() { f12(); f12(); f12(); f12(); }
function f14() { f13(); f13(); f13(); f13(); }
function f15() { f14(); f14(); f14(); f14(); }
f15();
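For scale: each level fans out by four, so f15() triggers 4^15 = 1,073,741,824 calls to f0, and about 1.43 billion calls in total.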
That code only does function calls; there is no arithmetic and there are no loops. So the performance should be directly linked to the cost of calling global functions. Here are the running times for Tachyon and other JS VMs (relative times in parentheses):
V8                        1.700s  ( 1.0)
Safari (WebKit)           8.753s  ( 5.1)
Firefox (SpiderMonkey)   10.354s  ( 6.1)
Tachyon                 111.240s  (65.4)
Two things come to mind:
- V8 is really doing well compared to the other commercial JS VMs
- Tachyon is doing really poorly: it is an order of magnitude slower than the commercial JS VMs and almost two orders of magnitude slower than V8
This is surprising because on large JS benchmarks (such as the Tachyon compiler), the performance difference is not so great (Tachyon is "only" 10 times slower than V8), and on fib, Tachyon is 4 times slower than V8.
In any case, I think this shows clearly that Tachyon's mechanism for calling global functions is particularly inefficient. We need to fix that!
Marc
On 2011-05-05, at 9:18 PM, Marc Feeley wrote:
At the last meeting I mentioned that I was worried about the performance of the getprop operation when calling global functions (such as in fib). Probably the biggest performance "mountain" at this point. So I wrote a simple benchmark to test how Tachyon performs when calling global functions:
By the way I just tried the following x86 assembler program, which is equivalent:
f0:   ret
f1:   call f0;  call f0;  call f0;  call f0;  ret
f2:   call f1;  call f1;  call f1;  call f1;  ret
f3:   call f2;  call f2;  call f2;  call f2;  ret
f4:   call f3;  call f3;  call f3;  call f3;  ret
f5:   call f4;  call f4;  call f4;  call f4;  ret
f6:   call f5;  call f5;  call f5;  call f5;  ret
f7:   call f6;  call f6;  call f6;  call f6;  ret
f8:   call f7;  call f7;  call f7;  call f7;  ret
f9:   call f8;  call f8;  call f8;  call f8;  ret
f10:  call f9;  call f9;  call f9;  call f9;  ret
f11:  call f10; call f10; call f10; call f10; ret
f12:  call f11; call f11; call f11; call f11; ret
f13:  call f12; call f12; call f12; call f12; ret
f14:  call f13; call f13; call f13; call f13; ret
f15:  call f14; call f14; call f14; call f14; ret
main: call f15; ret
I get an execution time of 2.2s, which is more than what V8 takes... So I suspect that V8 is doing something funky, like function inlining or elimination of useless code. In any case, Tachyon is still much slower than the other VMs, which are probably not doing such optimizations.
Marc
V8 is really doing well compared to the other commercial JS VMs
V8 started out as a pure JIT, and probably has a more solid design and optimization pipeline than the other two, which started out as bytecode interpreters and eventually had a JIT compiler tacked on top, one that still optimizes and compiles bytecode.
This is surprising because on large JS benchmarks (such as the Tachyon compiler), the performance difference is not so great (Tachyon is "only" 10 times slower than V8), and on fib, Tachyon is 4 times slower than V8.
Not so surprising. The Tachyon functions do actual work. Some of them can execute for a while. This microbenchmark only has function calls. So you're testing the thing we perform worst on.
On a side note: we know fetching from the global object is slow, but I wonder how slow function calls themselves are. We could get a bit of an idea by calling nested functions (closures) repeatedly. Those functions won't be fetched from an object; they're treated as local variables.
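For instance, a sketch of such a variant (untested): the same benchmark wrapped in an enclosing function, so every callee is a closure variable instead of a global property:

// Closure-call variant of the benchmark: the f's are locals of
// makeBench, so calls to them don't go through getprop on the
// global object.
function makeBench() {
    function f0() { }
    function f1() { f0(); f0(); f0(); f0(); }
    function f2() { f1(); f1(); f1(); f1(); }
    function f3() { f2(); f2(); f2(); f2(); }
    function f4() { f3(); f3(); f3(); f3(); }
    function f5() { f4(); f4(); f4(); f4(); }
    function f6() { f5(); f5(); f5(); f5(); }
    function f7() { f6(); f6(); f6(); f6(); }
    function f8() { f7(); f7(); f7(); f7(); }
    function f9() { f8(); f8(); f8(); f8(); }
    function f10() { f9(); f9(); f9(); f9(); }
    function f11() { f10(); f10(); f10(); f10(); }
    function f12() { f11(); f11(); f11(); f11(); }
    function f13() { f12(); f12(); f12(); f12(); }
    function f14() { f13(); f13(); f13(); f13(); }
    function f15() { f14(); f14(); f14(); f14(); }
    return f15;
}
makeBench()();

Comparing its running time against the global version would isolate the getprop cost from the raw call cost.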
In any case, I think this shows clearly that Tachyon's mechanism for calling global functions is particularly inefficient. We need to fix that!
We will, and we'll improve the performance in several other areas as well. I'm confident we can improve the generated IR, improve the quality of our generated machine code on several levels, increase our level of JavaScript support and optimize global function calls before the end of the summer.
However, in the case of global function calls... I'd rather we not rush to implement this optimization in the backend in a tightly coupled way. In my opinion, we should first implement some generic code patching functionality in the IR and backend, which we can then use to implement this optimization as well as several others.
- Maxime
On 2011-05-06, at 12:23 AM, chevalma@iro.umontreal.ca wrote:
we should first implement some generic code patching functionality in the IR and backend, which we can then use to implement this optimization as well as several others.
I'm not sure what you have in mind as a "generic code patching functionality in the IR". The problem is that code patching is very low level (in the end you have to write real machine instructions). You need to know the size of the machine instructions, because code patching overwrites existing machine instructions with new machine instructions.
That seems like a hard problem at the IR level. There would need to be a way to point to individual IR instructions and "write" new IR instructions. It is not obvious how to do that efficiently (one inefficient way is to modify the IR of the function, convert the whole function to machine code again, and overwrite the whole function, but that has problems, such as the addresses of return points changing... I wouldn't even call that code "patching").
On the other hand, code patching is easier to do if it is transparent to the IR. In other words, the back-end guarantees that the IR instructions in the code stream are faithfully converted to some machine code, and that machine code may be optimized with code patching as long as the semantics of the IR instructions are maintained. For example, a "jump to function in the global object" IR instruction could be initially implemented as a call to a handler that looks up the name of the function in the global object, and when it is found, the call to the handler is replaced by a direct jump to the function (of course this optimization needs to be reverted to the original code when that global property is stored to). So the IR of a function stays the same; it is the implementation of the IR using machine instructions that changes.
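To sketch the idea (every name below except getprop/globalobj is invented for illustration, not Tachyon's actual API), the handler would do roughly this:

function globalCallHandler(callSiteAddr, propName) {
    // Generic (slow) lookup of the callee in the global object.
    var fn = getprop(globalobj, propName);
    // Overwrite the call-to-handler at the call site with a direct
    // jump to the function's entry point (hypothetical primitives).
    patchDirectJump(callSiteAddr, getEntryPoint(fn));
    // Record the patch so a putprop on propName can revert it.
    recordPatchDependency(propName, callSiteAddr);
    return fn;
}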
Marc
For example, a "jump to function in the global object" IR instruction could be initially implemented as a call to a handler that looks up the name of the function in the global object, and when it is found, the call to the handler is replaced by a direct jump to the function
I don't really like that idea because it seems to me to be rather tightly coupled with both the IR and the backend. It seems to me that you're suggesting having one or more specific IR instructions with corresponding backend support just for that one optimization. Not to mention wanting to perhaps have code directly in the global object or special function entry points again just for that one optimization.
I'm not sure what you have in mind as a "generic code patching functionality in the IR". The problem is that code patching is very low level (in the end you have to write real machine instructions). You need to know the size of the machine instructions, because code patching overwrites existing machine instructions with new machine instructions.
I'm not suggesting we directly have a mechanism to write over a sequence of IR instructions. I would propose that we instead brainstorm over what kinds of things we may want to do with code patching, and implement a small set of "patchable" IR instructions, so to speak. Some potential ideas:
- An IR value that can be patched/updated later. This might allow making a call site call different functions at different times, or replacing a "constant" in a piece of code.
- A conditional branch (like an if instruction) that doesn't test any values and always branches the same way, until we toggle it to branch the other way instead (a patchable jump inside a function). Such patchable jumps could have more than two possible targets. This image comes to mind: http://visual.merriam-webster.com/images/transport-machinery/rail-transport/... (both ideas are sketched below).
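As a JS-level model of the two ideas (purely illustrative; the real mechanism would patch machine code, but the intended semantics are the same):

// 1) A patchable value: a call site whose target can be re-pointed.
function makePatchableCall(initialFn) {
    var target = initialFn;
    var call = function () { return target.apply(null, arguments); };
    call.patch = function (newFn) { target = newFn; };
    return call;
}

// 2) A toggleable branch: tests nothing, always goes the same way
//    until it is toggled to go the other way.
function makeToggle(blockA, blockB) {
    var current = blockA;
    var branch = function () { return current(); };
    branch.toggle = function () {
        current = (current === blockA) ? blockB : blockA;
    };
    return branch;
}

At the machine level, patch and toggle would rewrite instruction bytes rather than mutate a variable, but the dependency structure is the same.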
On 2011-05-06, at 12:48 PM, chevalma@iro.umontreal.ca wrote:
For example, a "jump to function in the global object" IR instruction could be initially implemented as a call to a handler that looks up the name of the function in the global object, and when it is found, the call to the handler is replaced by a direct jump to the function
I don't really like that idea because it seems to me to be rather tightly coupled with both the IR and the backend. It seems to me that you're suggesting having one or more specific IR instructions with corresponding backend support just for that one optimization. Not to mention wanting to perhaps have code directly in the global object or special function entry points again just for that one optimization.
"That one optimization" is slowing down our code enormously. I would estimate that implementing this optimization will make calling global functions 10 to 20 times faster. The only instruction that needs special support is the call instruction (and the putprop/getprop primitives).
I'm not sure what you have in mind as a "generic code patching functionality in the IR". The problem is that code patching is very low level (in the end you have to write real machine instructions). You need to know the size of the machine instructions, because code patching overwrites existing machine instructions with new machine instructions.
I'm not suggesting we directly have a mechanism to write over a sequence of IR instructions.
OK.
I would propose that we instead brainstorm over what kinds of things we may want to do with code patching, and implement a small set of "patchable" IR instructions, so to speak.
This seems to contradict what you said above. I suggested that we make the call instruction patchable and you said you don't like the idea. Can you explain your point better?
Some potential ideas:
- An IR value that can be patched/updated later. This might allow making a call site call different functions at different times, or replacing a "constant" in a piece of code.
Don't you think this is a "limited" form of code patching that will not allow us to do all the things that can be done with lower-level code patching? For example, for the optimization of function calls, it is desirable to eliminate the setup of the argument count register and skip the argument count check at the beginning of the function. So this means that some instructions need to be removed, which is more than changing a parameter/constant of an instruction.
- A conditional branch (like an if instruction) that doesn't test any values and always branches the same way, until we toggle it to branch the other way instead (a patchable jump inside a function). Such patchable jumps could have more than two possible targets. This image comes to mind: http://visual.merriam-webster.com/images/transport-machinery/rail-transport/...
This is easy enough. By the way I wouldn't call it a conditional branch... it is a changeable unconditional branch. What use do you have for this? Note that the branch will have a run-time cost, wherever the branch points to. So when generating the IR we have to consciously introduce a changeable branch (and be willing to pay the run-time cost), in the hope that it will save time later on. So... what kind of optimization do you want to do with this?
Marc
"That one optimization" is slowing down our code enormously. I would estimate that implementing this optimization will make calling global functions 10 to 20 times faster. The only instruction that needs special support is the call instruction (and the putprop/getprop primitives).
I think we all agree about the performance issue. My point is simply that we can possibly do better than providing special support for just call and those primitives. We could eventually directly inline functions, as V8 does, for example, and use code patching to replace the inlined code by something else.
This seems to contradict what you said above. I suggested that we make the call instruction patchable and you said you don't like the idea. Can you explain your point better?
I'd like to have a patching mechanism that is more generic, applicable to more than one instruction and to a variety of optimizations.
This is easy enough. By the way I wouldn't call it a conditional branch... it is a changeable unconditional branch. What use do you have for this?
Perhaps we could call it a toggle. It could be used for a variety of things. In this case, inlining a global function at a call site, and later, if someone redefines the global function, patching the branch to instead do a function call. Other possible uses are to enable or disable profiling code, enable or disable inline caching, enable or disable type optimizations, toggle on-stack replacement when a given point in the code is reached.
Note that the branch will have a run-time cost, wherever the branch points to. So when generating the IR we have to consciously introduce a changeable branch (and be willing to pay the run-time cost), in the hope that it will save time later on. So... what kind of optimization do you want to do with this?
Potentially, we could order the blocks so that the more likely "default" block immediately follows the toggle point, with no gap in the instructions. Toggling would write a jump directly over the start of the immediately following block. Toggling back would restore the original bytes of that block over the jump.
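Concretely, a sketch of the two writes, assuming a code block object with raw byte access (all of these names are invented for illustration):

var savedBytes = {};

// Patch: save the bytes at the toggle point (e.g. 5 bytes for a
// rel32 jmp on x86), then write a jump to the other target over them.
function toggleOn(codeBlock, offset, targetOffset) {
    savedBytes[offset] = codeBlock.readBytes(offset, 5);
    codeBlock.writeJump32(offset, targetOffset);
}

// Unpatch: restore the original bytes of the fall-through block.
function toggleOff(codeBlock, offset) {
    codeBlock.writeBytes(offset, savedBytes[offset]);
    delete savedBytes[offset];
}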
On 2011-05-06, at 1:29 PM, chevalma@iro.umontreal.ca wrote:
"That one optimization" is slowing down our code enormously. I would estimate that implementing this optimization will make calling global functions 10 to 20 times faster. The only instruction that needs special support is the call instruction (and the putprop/getprop primitives).
I think we all agree about the performance issue. My point is simply that we can possibly do better than providing special support for just call and those primitives. We could eventually directly inline functions, as V8 does, for example, and use code patching to replace the inlined code by something else.
This seems to contradict what you said above. I suggested that we make the call instruction patchable and you said you don't like the idea. Can you explain your point better?
I'd like to have a patching mechanism that is more generic, applicable to more than one instruction and to a variety of optimizations.
In the design of the code patching mechanism you can't avoid starting with no code patching features and growing that into a complete set of code patching features. In other words, it is an incremental design process: a code patching optimization opportunity is identified, a way to achieve it is designed, and this is iterated.
Generalizing the code patching mechanism is interesting. However, premature generalization is not a good idea, because you can end up with a generalization that is too abstract: hard to implement, expensive to execute, and unable to address some code patching situations (because there is no such thing as a fully general generalization). It is better to start by addressing our current needs and to adapt as our needs grow, eventually generalizing when we have a more complete understanding of our needs and the experience of doing code patching.
I am not suggesting that we limit the code patching mechanism to just calling global functions. It is merely a starting point. Something that we must clearly address. Other code patching cases will surely appear as we continue adding optimizations, and possibly in the near future.
This is easy enough. By the way I wouldn't call it a conditional branch... it is a changeable unconditional branch. What use do you have for this?
Perhaps we could call it a toggle. It could be used for a variety of things. In this case, inlining a global function at a call site, and later, if someone redefines the global function, patching the branch to instead do a function call. Other possible uses are to enable or disable profiling code, enable or disable inline caching, enable or disable type optimizations, toggle on-stack replacement when a given point in the code is reached.
Note that the branch will have a run-time cost, wherever the branch points to. So when generating the IR we have to consciously introduce a changeable branch (and be willing to pay the run-time cost), in the hope that it will save time later on. So... what kind of optimization do you want to do with this?
Potentially, we could order the blocks so that the more likely "default" block immediately follows the toggle point, with no gap in the instructions. Toggling would write a jump directly over the start of the immediately following block. Toggling back would restore the original bytes of that block over the jump.
All of this is speculative and seems hard to do. The kind of thing that is interesting to research but takes time.
Right now (working toward milestone 2 of the project) our objective is to improve the performance of Tachyon so that it is competitive with other JS VMs. Let's not neglect the low hanging fruit which will bring us closer to our objective. Research on a general code patching mechanism can be done after milestone 2.
Marc
The code patching mechanism I proposed is fairly simple to implement and will be directly useful for my research, which I need to get started soon. Let's not rule out implementing something like that in the near term.
I'm 100% sure we can implement something specialized just for global function calls and get performance gains with a tightly coupled approach, but that won't teach us all that much. Being able to not only optimize global calls, but easily inline global functions with little extra effort, on the other hand, now that's interesting.
What I propose is as follows:

1. Implement the toggleable unconditional branch (TUB?)
   - Perhaps ~1-2 weeks of work? Most of the work here is in the backend, but could be split among myself, Erick and others.
2. Implement global function call optimization using TUBs
   - ~1 week of work, should be very easy
3. Implement global function inlining heuristics
   - 1-2 days of effort, inlining mechanisms are already implemented
- Maxime
On 2011-05-06, at 4:36 PM, chevalma@iro.umontreal.ca wrote:
The code patching mechanism I proposed is fairly simple to implement and will be directly useful for my research, which I need to get started soon. Let's not rule out implementing something like that in the near term.
I'm 100% sure we can implement something specialized just for global function calls and get performance gains with a tightly coupled approach, but that won't teach us all that much.
Among the important things it will teach us is the best performance we can gain with code patching. That's because, at the back-end level, code patching is unconstrained (in the same sense that the execution time of a compiled high-level language can be no better than the best assembly language coding). I can see a software engineering advantage to express the code patching at the IR level (if it is possible in a simple way), but I also see a performance advantage to do the code patching in the back-end.
Specifically, for calling global functions, the back-end code patching approach can be compared to the "toggleable unconditional branch" approach you suggest to see how the performance compares. That's a really important thing to determine.
Being able to not only optimize global calls, but easily inline global functions with little extra effort, on the other hand, now that's interesting.
In the project we need to better understand how code patching can be used to improve performance. Both approaches have pros and cons. We need to experiment to gain experience and ultimately make the best decision. That is a sound scientific and engineering approach.
What I propose is as follows:
1. Implement the toggleable unconditional branch (TUB?)
   - Perhaps ~1-2 weeks of work? Most of the work here is in the backend, but could be split among myself, Erick and others.
2. Implement global function call optimization using TUBs
   - ~1 week of work, should be very easy
3. Implement global function inlining heuristics
   - 1-2 days of effort, inlining mechanisms are already implemented
For your point 2, have you considered how TUB would be used to optimize calling global functions? I think you are overlooking things. Before starting an implementation of TUB, some design and back-of-the-envelope analysis or prototyping is needed to know if TUB is appropriate to optimize calling global functions. Some questions that come to mind:
The IR needs a new instruction of the form:
change_TUB( the_tub, the_new_destination )
1) How can the TUB be identified? A first-class pointer to a label? A global name? How does the IR reference a TUB identifier?
2) How can the new destination of the TUB be identified? A first-class pointer to a label/function? A global name?
3) Who is the code patcher (in other words which part of the system is doing the code patching)? How is the relevant code patching information passed to the code patcher?
4) What is the IR for a global function call? It seems that IR pseudo-code would be something like:
the_tub_label:
    tub_branch unoptimized_label
unoptimized_label:
    fn = getprop( globalobj, "f" )
    call fn, nbargs=2, arg1, arg2
Where should the tub_branch be redirected if the global property "f" is known to be bound to a function whose code is at address F? You can't just do change_TUB(the_tub_label, F), because that would no longer do the parameter setup, the stack adjustment, the argument count setup, etc. So it has to be redirected to:
optimized_label:
    call F, nbargs=2, arg1, arg2
One concern is the code bloat due to the duplication of calls. Also, there are two branches (the TUB and the call) in the optimized and unoptimized versions, so there will be a runtime performance loss due to the superfluous branch (branches are important to avoid in code because they slow down the CPU pipeline). With aggressive code patching there will be many TUBs in the code, hence many superfluous branches. Also, how can we use TUBs to optimize calls in the case where the number of arguments of the call site matches the number of parameters of the callee (to avoid setup of the argument count parameter and the check of the argument count in the callee)?
5) What is the mechanism for keeping track of the TUBs that have been changed, so that they can be reverted to their original destination when the optimization is no longer valid? Typically a list of TUBs dependent on a certain property needs to be maintained. How is this allocated and maintained? What is the space usage?
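One plausible shape, just to fix ideas (all names hypothetical), is a per-property list, which puts the space usage at one entry per patched TUB:

var tubsByProp = {};   // global property name -> TUBs depending on it

function registerTubDependency(propName, tub) {
    var list = tubsByProp[propName] || (tubsByProp[propName] = []);
    list.push(tub);
}

// Called from putprop when a global property is written:
function invalidateTubs(propName) {
    var tubs = tubsByProp[propName];
    if (!tubs) return;
    for (var i = 0; i < tubs.length; ++i)
        tubs[i].revert();   // hypothetical: patch back to original target
    delete tubsByProp[propName];
}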
As I said in my previous message, we will start with an implementation of the code patching for global function calls in the back-end, allowing us to greatly improve the performance at a low implementation cost. After milestone 2 we can look at code patching at the IR level.
Marc
The IR needs a new instruction of the form:
change_TUB( the_tub, the_new_destination )
I was thinking it could be a call to a backend function to which we pass a reference to the TUB instruction.
1) How can the TUB be identified? A first-class pointer to a label? A global name? How does the IR reference a TUB identifier?
Probably the linker should store the address of the TUB patch point and the corresponding code block reference on the TUB IR instruction.
2) How can the new destination of the TUB be identified? A first-class pointer to a label/function? A global name?
A reference to another basic block. The list of blocks the TUB can possibly go to should probably be determined in advance at IR generation time.
3) Who is the code patcher (in other words which part of the system is doing the code patching)? How is the relevant code patching information passed to the code patcher?
The backend, because it knows the details of the architecture best. Possibly the patched sequences could be pre-generated at IR generation time, and ready to be written at the right offset.
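For example (again a sketch, with invented names), both byte sequences could be attached to the TUB instruction at compile time, so toggling at run time is a plain byte copy:

function prepareTubSequences(tubInstr, assembler) {
    // Encoded jump from the patch point to the alternate target.
    tubInstr.seqJump = assembler.encodeJump32(
        tubInstr.patchOffset, tubInstr.altOffset);
    // Original fall-through bytes, saved so the TUB can be toggled back.
    tubInstr.seqFallThrough = tubInstr.codeBlock.readBytes(
        tubInstr.patchOffset, tubInstr.seqJump.length);
}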
4) What is the IR for a global function call? It seems that IR pseudo-code would be something like:

the_tub_label:
    tub_branch unoptimized_label
unoptimized_label:
    fn = getprop( globalobj, "f" )
    call fn, nbargs=2, arg1, arg2
One basic block would have the generic getprop and call. The other would have the optimized static call or possibly the whole inlined function. The TUB would initially go to the optimized entry, but could go to the unoptimized one as well, and this would be indicated in the IR.
One concern is the code bloat due to the duplication of calls.
We should look at this more. You often talk about code size, including the size of the encodings of specific instructions, but we haven't done any experiments to measure the potential impact this could have.
Also, there are two branches (the TUB and the call) in the optimized and unoptimized versions
Possibly only one branch in the optimized case if the optimized entry point immediately follows.
how can we use TUBs to optimize calls in the case where the number of arguments of the call site matches the number of parameters of the callee (to avoid setup of the argument count parameter and the check of the argument count in the callee)?
We can put any code we want in the block for the optimized case.
As I said in my previous message, we will start with an implementation of the code patching for global function calls in the back-end, allowing us to greatly improve the performance at a low implementation cost. After milestone 2 we can look at code patching at the IR level.
The cost will be about the same for the more flexible system, and I'm pretty sure the implementation will go significantly faster if I help. I suggest we discuss this in person tomorrow.
- Maxime
On 2011-05-08, at 4:01 PM, chevalma@iro.umontreal.ca wrote:
I suggest we discuss this in person tomorrow.
I don't think that is the best use of our time. First of all we should include Erick in any discussion which touches on the back-end and his master's thesis. Second, we don't need more informal discussion on the subject. If you want to pursue the idea of IR code-patching you have to come up with arguments that are backed up with some analysis, evidence and measurements. For example, you could present this at one of the group meetings. I want to avoid using speculative arguments in the design of the system.
Marc