To estimate the time it takes to load an image in various ways, I ran the following test on my MacBook Pro.
I wrote a dummy assembler file with 5 MB worth of code (which is about the size of Tachyon's machine code right now). It contains lines like:
.text
.globl _codestart
.globl _codeend
_codestart:
movq $123,%rax
.byte 0xc3,0xc3,0xc3,0xc3,0xc3,0xc3,0xc3,0xc3
.byte 0xc3,0xc3,0xc3,0xc3,0xc3,0xc3,0xc3,0xc3
.byte 0xc3,0xc3,0xc3,0xc3,0xc3,0xc3,0xc3,0xc3
... ;; about 700,000 lines of the same here
_codeend:
This file takes 2.6 seconds to assemble using the GNU assembler, which is reasonable.
I linked the resulting object file with a main program in 2 variants:
- main just calls codestart and terminates (direct embedded code)
- main allocates a 5 MB machine code block, and copies the code from codestart to codeend to the block (copied embedded code)
Finally, I also have a minimal C program which allocates a 5 MB machine code block, and reads a 5 MB binary file to the block using the Unix open and read functions (file code).
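In rough outline, the last two variants look something like this (a sketch only; the "direct embedded code" variant simply calls codestart directly, and the file name "code.bin", the mmap flags and the entry point signature below are assumptions, since the actual test programs are not included here):

/* Sketch of the "copied embedded code" and "file code" variants
   (hypothetical file name "code.bin"; error handling omitted). */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

extern char codestart, codeend;   /* symbols from the assembled 5 MB file */

typedef int (*entry_fn)(void);

int main(int argc, char *argv[])
{
  size_t size = &codeend - &codestart;

  /* Allocate a machine code block with execute permission. */
  void *block = mmap(NULL, size, PROT_READ | PROT_WRITE | PROT_EXEC,
                     MAP_PRIVATE | MAP_ANON, -1, 0);

  if (argc > 1 && strcmp(argv[1], "file") == 0)
  {
    /* "file code": read a 5 MB binary file into the block. */
    int fd = open("code.bin", O_RDONLY);
    read(fd, block, size);
    close(fd);
  }
  else
  {
    /* "copied embedded code": memcpy the code from the text segment. */
    memcpy(block, &codestart, size);
  }

  return ((entry_fn)block)();   /* jump to the copied/loaded code */
}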
Here are the running times in seconds (after a few dry runs):
direct embedded code   0.003
copied embedded code   0.016
file code              0.009
The run time advantage of the first method is pretty clear. The startup is 3 to 5 times faster than the other methods.
So let's aim to implement that. As I said, the startup time is really important for some applications (remember what Andreas said about Firefox... one of the reasons it is not implemented more in JS is that the startup time is currently too large).
Marc
Here are the running times in seconds (after a few dry runs):
direct embedded code   0.003
copied embedded code   0.016
file code              0.009
The run time advantage of the first method is pretty clear. The startup is 3 to 5 times faster than the other methods.
So let's aim to implement that. As I said, the startup time is really important for some applications
I thought we'd already agreed. The second approach is more compatible with our system, for reasons I've mentioned in the meeting.
I have to question how < 20ms could be too long of a load time for our applications (that's less than 1/50th of a second)... And I have to question how opening a whole other binary file and reading its contents could be faster than doing a memcpy of something that's already in memory.
- Maxime
On 2011-04-06, at 8:17 PM, chevalma@iro.umontreal.ca wrote:
Here are the running times in seconds (after a few dry runs):
direct embedded code   0.003
copied embedded code   0.016
file code              0.009
The run time advantage of the first method is pretty clear. The startup is 3 to 5 times faster than the other methods.
So let's aim to implement that. As I said, the startup time is really important for some applications
I thought we'd already agreed. The second approach is more compatible with our system, for reasons I've mentioned in the meeting.
Please explain what you mean by "more compatible with our system". It is not clear to me what you mean. Also, I don't recall that we agreed on anything concerning the design of the loader. I originally agreed with your argument that it wouldn't be possible to patch the executable code, but I later explained that this would not be a problem because we can use the OS assembler to patch the addresses when the file is loaded.
I have to question how < 20ms could be too long of a load time for our applications (that's less than 1/50th of a second)...
One application where startup time is important is when you have a web server processing requests by starting up a new process running the desired program (which is a realistic situation for our "server-side" use of Tachyon). If it takes 1/50th of a second to start the program, then you can only process 50 requests per second (per processor). That's lame. Being 5 times faster matters. Shell scripts which invoke lots of small programs (for example when the operating system is booting up) are another situation.
And I have to question how opening a whole other binary file and reading its contents could be faster than doing a memcpy of something that's already in memory.
I'm not sure what it is you "question" in my results. These are measurements on a real machine! It seems reasonable to me... In the memcpy case there are two copies being done (one from disk to memory when loading the program, and one from memory to memory when doing the memcpy). When reading from a file, there is only one copy (from disk to memory). Also in the "direct embedded code" approach, not all the code is run, so the operating system may just map it in the virtual address space and not even load it into RAM. That's one more advantage of using the OS loader where possible.
Marc
Please explain what you mean by "more compatible with our system". It is not clear to me what you mean. Also, I don't recall that we agreed on anything concerning the design of the loader. I originally agreed with your argument that it wouldn't be possible to patch the executable code, but I later explained that this would not be a problem because we can use the OS assembler to patch the addresses when the file is loaded.
And I explained that the OS linker won't be able to call hash consing functions, and that we may need to re-patch the machine code during execution (e.g.: after garbage collection).
The reason why the approach I propose is "more compatible with our system" is that it fits with the way code is compiled and linked at run-time. Storing code in the .text segment is what is traditionally done for statically compiled programs. It's outside of our control and requires special handling, since this code is potentially not movable, not freeable, not rewritable.
One application where startup time is important is when you have a web server processing requests by starting up a new process running the desired program (which is a realistic situation for our "server-side" use of Tachyon).
People haven't been doing that since the CGI scripting days. Every modern server-side scripting system (RoR, Django, PHP, etc.) ships with a module that runs persistently in the web server.
I would appreciate it if you trusted me on this issue. I intend for the implementation to be simple and efficient while fitting with the current design. If somehow, someday, we find that our loading time is hugely burdened and that it's worth complicating our implementation to try and scrape off 15ms, it can be changed down the road.
In the meantime, initializing Tachyon after the machine code is loaded and linked is going to take at least an order of magnitude more time than the loading/linking itself... And this will still make for a very reasonable total loading time.
- Maxime
On 2011-04-06, at 10:06 PM, chevalma@iro.umontreal.ca wrote:
Please explain what you mean by "more compatible with our system". It is not clear to me what you mean. Also, I don't recall that we agreed on anything concerning the design of the loader. I originally agreed with your argument that it wouldn't be possible to patch the executable code, but I later explained that this would not be a problem because we can use the OS assembler to patch the addresses when the file is loaded.
And I explained that the OS linker won't be able to call hash consing functions,
The linker shouldn't have to call hash consing functions at run time. When the image is dumped, a table of all the strings can be constructed (using hash-consing, but at "image dump time"). That hash-consed string table can then be dumped to the image. That way there will be no need to reconstruct the table when the image starts running, which would be a waste of execution time.
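As a sketch of the idea (the layout, field names and hash values below are purely illustrative, not Tachyon's actual string representation):

/* Hypothetical layout for a string table built at image dump time.
   The dumper hash-conses every string once and emits the result as
   static data, so nothing is rebuilt when the image starts running. */
#include <stdint.h>

typedef struct {
  uint32_t hash;       /* hash computed at image dump time */
  uint32_t length;
  const char *chars;   /* characters also frozen in the image */
} image_string;

/* Emitted by the dumper; addresses are fixed by the OS linker/loader. */
static const image_string string_table[] = {
  { 0x00000001u, 6, "length" },      /* placeholder hash values */
  { 0x00000002u, 9, "prototype" },
  /* ... one entry per hash-consed string ... */
};

/* At run time, interning a new string only needs to probe this table;
   the table itself never has to be reconstructed at startup. */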
Just to be clear, I am trying to capitalize on opportunities to do some of the work once and for all when the image is dumped. Including:
- not having to copy the code to a new zone (by using executable binary format)
- not having to patch the code (by using the assembler and linker)
- not having to reconstruct the initial string table (by computing it at image dump time)
Wouldn't it be nice if Tachyon was already doing useful stuff (such as executing/compiling/optimizing the user's JS code) in as little as 3 to 5 milliseconds after startup? That is definitely possible with the directly executable image format I am proposing.
and that we may need to re-patch the machine code during execution (e.g.: after garbage collection).
We can arrange things so that re-patching the machine code is not needed for garbage-collection. We can distinguish "permanent objects" which are allocated forever and whose address does not change, from non-permanent objects (which are allocated during the execution of the program and can be reclaimed). A tag in an object's header, or address ranges, can be used to mark objects as permanent. The code in an image would be a set of permanent objects. If permanent objects only point to permanent objects, the garbage-collector can also avoid scanning them to find the live objects. This can save a lot of work for the garbage-collector (it does not have to scan the code in the image, which for some applications can be most of the "data").
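A minimal sketch of what the permanent-object test could look like (the flag, struct layout and symbol names are hypothetical, just to make the idea concrete):

/* Hypothetical permanent-object test: either a tag bit in the header
   or an address-range check against the dumped image.  The GC can skip
   objects for which this returns true (they are neither moved, freed
   nor, if permanent objects only point to permanent objects, scanned). */
#include <stdint.h>

#define PERM_BIT 0x1                        /* illustrative header flag */

typedef struct { uintptr_t header; } obj;

extern uintptr_t image_start, image_end;    /* set by the loader */

static int is_permanent(obj *o)
{
  if (o->header & PERM_BIT)                 /* variant 1: header tag */
    return 1;
  uintptr_t a = (uintptr_t)o;               /* variant 2: address range */
  return a >= image_start && a < image_end;
}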
Re-patching will be needed for inline-caching and related techniques. But that's not a problem because we can always use the mprotect system call to mark those pages with permission read-write-execute. At least there won't be two copies of the code (one in the image and one in the heap) and no copying/patching of the code and construction of the string table at run time.
The reason why the approach I propose is "more compatible with our system" is that it fits with the way code is compiled and linked at run-time. Storing code in the .text segment is what is traditionally done for statically compiled programs. It's outside of our control and requires special handling, since this code is potentially not movable, not freeable, not rewritable.
One application where startup time is important is when you have a web server processing requests by starting up a new process running the desired program (which is a realistic situation for our "server-side" use of Tachyon).
People haven't been doing that since the CGI scripting days. Every modern server-side scripting system (RoR, Django, PHP, etc.) ships with a module that runs persistently in the web server.
And that technique has a bunch of problems that come with it... including security issues, uncontrolled space consumption (leaks from one execution to the next), and load balancing. The reason such modules are needed is to circumvent the slow startup times of some language implementations. That's what I want to avoid with a clean efficient design!
I would appreciate it if you trusted me on this issue.
It is not a matter of trust. I am putting forward some technical arguments and measurements which show the advantages of the directly executable image format. Do you dispute my technical arguments?
I intend for the implementation to be simple and efficient while fitting with the current design. If somehow, someday, we find that our loading time is hugely burdened and that it's worth complicating our implementation to try and scrape off 15ms, it can be changed down the road.
Some days you make the argument that "we can change it down the road", and other days "let's do it right the first time so that we don't have to do it again". Please make up your mind. You can't have it both ways.
In the garbage-collector I want to be able to explore the benefit of having permanent objects to reduce the garbage-collection time. Also, copying the code from the image to a new zone doubles the virtual memory footprint of the code. I want to avoid this extra burden on the memory system which will decrease the effectiveness of the garbage-collector. Moreover, I am also concerned about mobile platforms which have much less memory and CPU speed (the loading time will be roughly 10 times longer on ARM so a 50 ms load time becomes a 0.5 second load time, which is noticeable and annoying to the user).
In the meantime, initializing Tachyon after the machine code is loaded and linked is going to take at least an order of magnitude more time than the loading/linking itself... And this will still make for a very reasonable total loading time.
Don't justify an inefficiency on the basis of the existence of another bigger inefficiency! This kind of reasoning quickly turns into an acceptance of a system's sluggishness. It takes only one inefficient subsystem to ruin the overall performance.
Initializing Tachyon should not take much time. If it does we need to profile the code and figure out why. Currently we haven't done any of that and the quality of the generated machine code is poor. Once that is fixed, the loading time will be dominant.
Marc
This is incompatible with the current design.
The code segment is read-only. This means no custom linking, no freeing of the code, and no possibility of doing code patching or inline caching in this code. It prevents dynamic modification of the code. This is not just incompatible with the current design, it goes against our performance and research goals.
The strings and the hash consing table are allocated in the heap. To allocate them in a frozen memory location means special handling for strings allocated prior to initialization. Special cases incur a run-time penalty as well. These strings and the hash consing table can no longer be regular JavaScript objects. The same goes for machine code blocks. The statically allocated ones would need many special cases of their own.
It's obvious to me that an initialization time penalty of less than 20ms is perfectly reasonable if it means the system can remain simple and dynamic.
There is no requirement for Tachyon to be able to start up faster than everybody else.
Remember that we aren't trying to build a commercial product, and that even if we were, we could still be competitive with my approach:
maxime@maxime-desktop:~$ time d8 -e ""
real    0m0.040s
user    0m0.032s
sys     0m0.008s
Premature optimization is not warranted. Especially when it will only complicate and, well, worsen our design.
Tachyon is a dynamic program, and I believe it would be a mistake to try to compile it in the same way GCC compiles code. This would place many limiting restrictions on our design which will only hurt us down the road.
- Maxime
As I said, the code segment can be made read-write-execute with a call to mprotect.
On which operating systems can you guarantee that this is feasible?
Maxime... just do it, will you!
I proposed a simple solution that is probably quite fast and could easily be working within a week or two. I'm not going to radically change the design in a way I believe is incompatible with our research goals so we can gain a microscopic short-term performance advantage that isn't really required in the first place.
If you're going to continuously ignore my arguments and impose constraints on the project that might make the goal of my Ph.D. thesis (a self-optimizing compiler) impossible or way more difficult and tedious than it needs to be, then I'd rather quit the project for good, and I'm being serious.
It was the view of my M.Sc. supervisor, Prof. Laurie Hendren, that a Ph.D. thesis involved coming up with something creative and new. I can't do this if you're always trying to impose rigid design decisions on me which I believe are fundamentally opposed to what I'm trying to achieve.
I need some amount of creative freedom. If I can't have it here then I'll have to try and find it somewhere else. This makes me really sad because I think I'd be missing out on a great opportunity, but I'm sick of the conflicts and the fighting and all the unneeded stress it causes me.
- Maxime
On 2011-04-07, at 3:44 PM, Maxime Chevalier-Boisvert wrote:
As I said, the code segment can be made read-write-execute with a call to mprotect.
On which operating systems can you guarantee that this is feasible?
mprotect is in the same family of functions as mmap, which Tachyon currently uses to allocate machine code blocks. So it will very likely work on all the operating systems that we care about. Note that the equivalent function on WIN32 is called VirtualProtect. These facilities have existed for over 10 years in various operating systems.
I wrote the program attached below to verify that this technique works on the operating systems we care about most (GNU/Linux, Mac OS X, and Windows).
Marc
/*
 * File: "self-modif-code.c"
 *
 * Tests operating system support for self modifying code.
 *
 * Compile/execute:
 *
 *   % gcc self-modif-code.c
 *   % ./a.out
 *   code_size = 16
 *   f(10) = 100
 *   55 48 89 e5 89 7d fc 8b 45 fc 0f af 45 fc c9 c3
 *   c3 48 89 e5 89 7d fc 8b 45 fc 0f af 45 fc c9 c3
 *   f(10) = 10
 *   f(10) = 100
 *
 * Correct operation has been verified on these operating systems:
 *
 *   - GNU/Linux 2.6.27.25-78.2.56.fc9.i686d
 *   - MacOS X 10.6.7
 *   - Windows XP and Windows 7
 */

#include <stdio.h>
#include <stdlib.h>
#include <stddef.h>
#include <string.h>

typedef unsigned char u8;

int f(int x) { return x*x; }
int f_end(int x) { return 0; }

#define MAX_CODE_SIZE 100

u8 *code = (u8*)f;
u8 *code_end = (u8*)f_end;
u8 code_copy[MAX_CODE_SIZE];
int code_size;

#if defined(linux) || defined(__linux) || defined(__linux__)
#define USE_MPROTECT
#endif

#if defined(__MACOSX__) || (defined(__APPLE__) && defined(__MACH__))
#define USE_MPROTECT
#endif

#if defined(WIN32) || defined(_WIN32)
#define USE_VIRTUALPROTECT
#endif

#ifdef USE_MPROTECT
#include <sys/mman.h>
#endif

#ifdef USE_VIRTUALPROTECT
#include <windows.h>
#endif

void make_code_writable()
{
  int page_size = 4096;
  ptrdiff_t a = ~(page_size-1) & (ptrdiff_t)code;
  ptrdiff_t b = ~(page_size-1) & (page_size-1+(ptrdiff_t)code_end);

#ifdef USE_MPROTECT

  /* mprotect exists since 4.4 BSD circa 1994 */

  mprotect((u8*)a, b-a, PROT_READ | PROT_WRITE | PROT_EXEC);

#endif

#ifdef USE_VIRTUALPROTECT

  /* VirtualProtect exists since Windows 2000 circa 1999 */

  DWORD old;
  VirtualProtect((u8*)a, b-a, PAGE_EXECUTE_READWRITE, &old);

#endif
}

void print_code()
{
  int i;
  for (i=0; i<25 && i<code_size; i++)
    printf("%02x ",code[i]);
  printf("\n");
}

int main()
{
  code_size = code_end - code;

  printf("code_size = %d\n", code_size);

  if (code_size > MAX_CODE_SIZE)
    exit(1);

  memcpy(code_copy, code, code_size);

  printf("f(10) = %d\n", f(10));

  print_code();

  make_code_writable();

  code[0] = 0xc3; /* x86 ret instruction */

  print_code();

  printf("f(10) = %d\n", f(10));

  memcpy(code, code_copy, code_size);

  printf("f(10) = %d\n", f(10));

  return 0;
}
On 2011-04-11, at 5:15 PM, Marc Feeley wrote:
On 2011-04-07, at 3:44 PM, Maxime Chevalier-Boisvert wrote:
As I said, the code segment can be made read-write-execute with a call to mprotect.
On which operating systems can you guarantee that this is feasible?
mprotect is in the same family of functions as mmap, which Tachyon currently uses to allocate machine code blocks. So it will very likely work on all the operating systems that we care about. Note that the equivalent function on WIN32 is called VirtualProtect. These facilities have existed for over 10 years in various operating systems.
I wrote the program attached below to verify that this technique works on the operating systems we care about most (GNU/Linux, Mac OS X, and Windows).
Marc
Following Maxime's comments today, I have tested the loading time with and without the execution of the code (0% means the code is loaded but not executed, and 100% means that in addition to loading the code, each instruction in all of the code is executed once).
I have tested on Linux (baro) and my MacBook Pro, with images from 1MB to 128MB, using a local disk. I measured the assembly time in seconds, and the load+execution time in milliseconds. The "no copy" approach directly calls the code in the "text" area. The "with copy" approach allocates a machine code block, copies the code from the "text" area to the machine code block (using memcpy), and then calls the machine code block (the copied code would have to be patched with the correct absolute addresses, but that is ignored for now, which means the time for the "with copy" case is optimistic).
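The attached test files are not reproduced here, but the shape of the "with copy" measurement is roughly as follows (a sketch only; the symbol names, mmap flags and use of gettimeofday are assumptions):

/* Rough sketch of the "with copy" timing; the "no copy" case would call
   code_start directly instead of copying.  Details are assumptions. */
#include <stdio.h>
#include <string.h>
#include <sys/time.h>
#include <sys/mman.h>

extern char code_start, code_end;    /* from the generated image (.s file) */
typedef void (*entry_fn)(void);

static double now_ms(void)
{
  struct timeval tv;
  gettimeofday(&tv, NULL);
  return tv.tv_sec * 1000.0 + tv.tv_usec / 1000.0;
}

int main(void)
{
  size_t size = &code_end - &code_start;
  double t0 = now_ms();

  void *block = mmap(NULL, size, PROT_READ | PROT_WRITE | PROT_EXEC,
                     MAP_PRIVATE | MAP_ANON, -1, 0);
  memcpy(block, &code_start, size);  /* copy from the "text" area */
  ((entry_fn)block)();               /* execute (some of) the code */

  printf("load+execution time = %.3f ms\n", now_ms() - t0);
  return 0;
}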
Here are the results:
Linux (baro)
                         "load+execution" time (ms)
  image     asm          0% executed         100% executed
  size      time         no      with        no      with
            (s)          copy    copy        copy    copy
  1MB        0.257        1        2           1        2
  2MB        0.447        1        3           1        3
  4MB        0.841        1        5           3        6
  8MB        1.602        1       10           5       11
  16MB       3.054        1       18           9       22
  32MB       5.796        1       33          17       31
  64MB      11.154        1       57          25       63
  128MB     22.014        1      109          55      125

MacOS X (macro)
                         "load+execution" time (ms)
  image     asm          0% executed         100% executed
  size      time         no      with        no      with
            (s)          copy    copy        copy    copy
  1MB        0.849        3        5           3        5
  2MB        0.865        3        7           6        9
  4MB        1.847        3       14           8       21
  8MB        3.483        4       20          12       23
  16MB       7.050        5       39          21       46
  32MB      14.127        7       77          38       84
  64MB      27.779       15      151          74      168
  128MB     58.125       35      300         149      328
The "no copy" approach is always a win, whether the code is executed or not.
When all the code is executed (which is very unlikely in practice) the "no copy" approach is about twice as fast as "with copy". Obviously this becomes significant with large images. Interestingly, on Linux, the "no copy" approach has a constant cost for the loading (1 millisecond) regardless of the size of the image, and the cost goes up proportionately to how much of the image is actually executed (more precisely the number of pages of code that are touched by the execution). On MacOS X, when 0% is executed the "no copy" approach is up to 11 times faster than the "with copy" approach, and when 100% is executed "no copy" is consistently 2 times faster.
Note that the Firefox application on my Mac is 69MB. So the 64MB case in my tables is probably representative of a full browser implemented in JavaScript.
I've attached the files I have used on Linux.
Marc
P.S. In this test I have used the assembler to generate data in the "text" section. The generated image, in assembler, looks like this:
.text
.globl _code_start
.globl _code_end
.align 12
_code_start:
.byte 144,144,144,144,144,144,144,144,144,144,144,144,144,144,144,144,144,144,144,144,184
.long _code_start
.byte 144,144,144,144,144,144,144,144,144,144,144,144,144,144,144,184
.long _code_start
.byte 144,144,144,144,144,144,144,144,144,144,144,144,144,144,144,184
.long _code_start
.byte 144,144,144,144,144,144,144,144,144,144,144,144,144,144,144,184
.long _code_start
.byte 144,144,144,144,144,144,144,144,144,144,144,144,144,144,144
.byte 195
.align 12
_code_end:
144 is the encoding of "nop"
184 is the first byte of the encoding of "movl $_code_start,%eax"
195 is the encoding of "ret"
I have put references to _code_start inside the code just to show how absolute addresses can be put in the code such that the operating system linker/loader will fill in those parts with the correct absolute address.
(remember what Andreas said about Firefox... one of the reasons it is not implemented more in JS is that the startup time is currently too large).
I also have to add, the reason why Firefox has an issue with the load time w.r.t. JS code is most likely that they *do not* precompile said JS code.
- Maxime
On 2011-04-06, at 8:20 PM, chevalma@iro.umontreal.ca wrote:
(remember what Andreas said about Firefox... one of the reasons it is not implemented more in JS is that the startup time is currently too large).
I also have to add, the reason why Firefox has an issue with the load time w.r.t. JS code is most likely that they *do not* precompile said JS code.
Our compiler will be compiled, and so is theirs. The hope is to be able to compile applications to an executable, but there are many hurdles to clear before this is practical in real applications, such as a browser made of many modules possibly loaded on demand. So I think that the first use case for Tachyon in real applications will be very close to what Firefox does currently.
Marc