After changing the lookup algorithm on the global object to test the first 3 properties before falling back to a linear search, experiments show a .5 second speedup for fib(40) on a 2.13 GHz Intel Core 2 Duo MacBook.
Fast lookup: user 0m3.702s
Original linear search: user 0m4.339s
Those tests can be repeated by running:
time make test
in the two following branches, available on the tachyon repository:
globalOriginalLinearSearch globalFastSearch
The source code for the search algorithm can be found in 'codegen/ir-to-asm-x86.js' at line:
338: irToAsm.translator.prototype.get_prop_addr = function (opnds, dest)
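For reference, here is a rough JavaScript-level sketch of the idea (the actual implementation emits x86 instructions; the flat key/value slot layout shown below is only for illustration, not the real object layout):

function lookupGlobal(props, key)
{
    // The first three slots are probed with straight-line code
    // (no loop overhead), which covers the common case for fib.
    if (props.length > 0 && props[0].key === key) return props[0].value;
    if (props.length > 1 && props[1].key === key) return props[1].value;
    if (props.length > 2 && props[2].key === key) return props[2].value;

    // Otherwise, fall back to the original linear search.
    for (var i = 3; i < props.length; ++i)
    {
        if (props[i].key === key)
            return props[i].value;
    }

    return undefined; // property not found
}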
Erick
Ok, I installed the modifications to d8. Unfortunately, I get a segfault on my AMD 64 bit machine running Ubuntu. I also get a segfault when trying to run any of the sh scripts under codegen, so the problem might be in the d8 extensions themselves. In any case, you should try to test your code on a linux 64 bit machine when you get the chance.
On another note, I can't run your unit test yet, but I get the gist that you're running fib(40) in there. I suggest instead making the unit test run fib(7) or fib(10), just to validate that it works, while keeping the unit tests running very fast. We should probably keep our actual performance tests separate. We might want to devise a benchmark suite system at some point.
- Maxime
On 2010-08-28, at 5:37 PM, chevalma@iro.umontreal.ca wrote:
Ok, I installed the modifications to d8. Unfortunately, I get a segfault on my AMD 64 bit machine running Ubuntu. I also get a segfault when trying to run any of the sh scripts under codegen, so the problem might be in the d8 extensions themselves. In any case, you should try to test your code on a linux 64 bit machine when you get the chance.
I don't have access to a linux 64 bit machine at home. I might try again on a lab machine if the setup is not too painful.
On another note, I can't run your unit test yet, but I get the gist that you're running fib(40) in there. I suggest instead making the unit test run fib(7) or fib(10), just to validate that it works, while keeping the unit tests running very fast. We should probably keep our actual performance tests separate. We might want to devise a benchmark suite system at some point.
On the master branch, I use fib(20) as part of the regular tests to keep them fast enough to be unnoticeable. I only use fib(40) in the two aforementioned branches to measure the speed improvement.
Erick
On 2010-08-28, at 5:37 PM, chevalma@iro.umontreal.ca wrote:
Ok, I installed the modifications to d8. Unfortunately, I get a segfault on my AMD 64 bit machine running Ubuntu. I also get a segfault when trying to run any of the sh scripts under codegen, so the problem might be in the d8 extensions themselves. In any case, you should try to test your code on a linux 64 bit machine when you get the chance.
I haven't looked at the generated code, but I'm almost sure that it is a question of word size. When values are written to the stack they will take 8 bytes each instead of 4, so the stack offsets must be modified accordingly. Where in the system is there a selection of the target architecture for the code generation?
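For example, the stack offsets could be computed from a word-size parameter instead of a hard-coded 4; a minimal sketch (the names are hypothetical, not the actual Tachyon configuration):

// Hypothetical target description; 4 on x86-32, 8 on x86-64.
var target = { ptrByteSize: 8 };

// Compute the byte offset of a spilled value from its slot index,
// scaling by the target word size instead of assuming 4-byte slots.
function stackSlotOffset(slotIndex)
{
    return slotIndex * target.ptrByteSize;
}

// e.g. slot 2 is at [esp + 8] on a 32-bit target,
// but at [rsp + 16] on a 64-bit target.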
Marc
Neat! Can you measure what you would get with a direct branch (no lookup), to see how much was saved by the faster lookup?
Marc
I modified the implementation of Fibonacci I showed you at the last meeting so that the lookup of the function on the global object is done only the first time the function is called. Once the function address is known, I patch the relative offset to the function at the call site (no global object lookup), using a technique similar to what Marc described in a previous mail.
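Roughly, the patching looks like this (a simplified sketch assuming a near call encoded as E8 followed by a 32-bit displacement relative to the next instruction; the codeBlock byte array and the names are only for illustration):

function patchCallSite(codeBlock, callInstrAddr, targetFuncAddr)
{
    var callInstrLength = 5;                     // E8 opcode + rel32
    var nextInstrAddr   = callInstrAddr + callInstrLength;
    var rel32           = targetFuncAddr - nextInstrAddr;

    // Overwrite the 4 displacement bytes following the E8 opcode
    // (little-endian), so later calls branch directly to the function
    // without going through the global object lookup.
    for (var i = 0; i < 4; ++i)
    {
        codeBlock[callInstrAddr + 1 + i] = (rel32 >> (8 * i)) & 0xFF;
    }
}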
Although the output is correct, I can't make sense of the performance results I get. Replacing only one of the two recursive calls (either one) with the "optimized" version gives a .4 second speedup, but replacing both incurs a .1 second slowdown! Tests were run on my laptop (MacBook) for fib(40).
The code is in another branch: 'directFuncCall'.
To run it, simply call:
./fib-ir.sh
from the codegen directory.
A direct link to the gitweb source file is:
http://www.iro.umontreal.ca/%7Etachyon/gitweb/gitweb.cgi?p=tachyon.git;a=blob;f=source/codegen/test-x86-fibonacci-ir-translation.js;h=6ffacf77eaa04011780031b7ed0477703daa7caa;hb=bcdcae509bf05fac5e7d7ee7b2da214cb7d216ca
Your ideas are welcome!
Erick
On 2010-08-29, at 1:57 PM, Marc Feeley wrote:
Neat! Can you measure what you would get with a direct branch (no lookup) to see how much was saved by the faster lookup.
Marc
I don't know if this is the case here, but in general, instruction caches can have rather unpredictable effects on performance. Small changes that should improve performance can result in more instruction cache collisions and lower performance. Adding extra junk instructions can sometimes result in faster code, and removing some can result in slower code.
You could test this hypothesis by trying your code on a machine with a different processor (one that has a different instruction cache configuration). You might find you get different results.
- Maxime
As you know, I'm currently working on implementing a lower-level IR subset and the associated handlers, to simplify the code generation work the backend has to do. One issue that caught my attention today is that the IR has both a call and a construct instruction. The construct instruction (used for constructor calls, with the new operator) implicitly creates a new object and binds it to the this value.
This is probably not the kind of work we want the backend to do, because it won't know how to create a JS object on its own. I therefore thought it would make sense to translate constructor calls into regular function calls: I first create an object to use as the this value before the call, then check whether the function returned an object after the call; if so, I use that object as the newly created one, instead of the one I created before the call.
One issue, however, is that while this should work for JavaScript code as far as I understand the ECMA spec (feel free to double-check), we may still want to know which calls really are object constructions: if we end up calling native C/C++ code (e.g. the browser DOM), it may expect us to tell it which calls are constructor calls, as opposed to regular function calls.
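At the JavaScript level, the translation I have in mind would behave roughly like the following sketch (the real transformation would be expressed on the IR; Object.create and the helper name are just for illustration):

function constructAsCall(callee, args)
{
    // Create the object to use as the 'this' value, linked to the
    // callee's prototype.
    var thisObj = Object.create(callee.prototype);

    // Perform a regular function call with that 'this' value.
    var retVal = callee.apply(thisObj, args);

    // Per the ECMA spec, if the constructor returned an object,
    // that object is the result; otherwise use the one created above.
    if ((typeof retVal === 'object' && retVal !== null) ||
        typeof retVal === 'function')
        return retVal;

    return thisObj;
}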
The current solutions I have in mind are either:
1) Keep the construct instruction separate from call, but have it take an explicit this pointer, just like call.
2) Get rid of the construct instruction, and instead have a flag (eg: isConstruct) on the call instruction.
I'm personally more in favor of the first one, because it avoids special-casing the call instruction in various places. Which one would you guys vote for?
- Maxime
On 2010-08-31, at 1:09, Maxime Chevalier-Boisvert wrote:
[...]
One issue, however, is that while this should work for JavaScript code, as far as I understand the ECMA spec (feel free to double check), we may still want to know which calls really are object constructions, because if we end up calling native C/C++ code (eg: browser DOM), it may expect us to tell it what is a constructor call, as opposed to a regular function call.
I'm not sure I understand your foreign code argument, but the principle of differentiating between the two types of call sounds useful to me even for the purpose of our optimizations down the road. The approach you're proposing sounds good to me.
The current solutions I have in mind are either:
1) Keep the construct instruction separate from call, but have it take an explicit this pointer, just like call.
2) Get rid of the construct instruction, and instead have a flag (eg: isConstruct) on the call instruction.
I'm personally more in favor of the first one, because it avoids special-casing the call instruction in various places. Which one would you guys vote for?
#1 also gets my vote, and mostly for the same reason.
Bruno
I'm not sure I understand your foreign code argument, but the principle of differentiating between the two types of call sounds useful to me even for the purpose of our optimizations down the road. The approach you're proposing sounds good to me.
I'm mostly thinking of what ends up happening with the C++ code at the very end of the road:
DOMObjCode.call(...);
vs
DOMObjCode.construct(...);
I'm expecting a naive, WebKit-like implementation of JavaScript to have different ways of signaling each kind of call.
- Maxime