I'm not necessarily suggesting we use the call and ret instructions. I simply believe that on an 8-register machine, we want to keep as many registers available as we can. If we reserve 3, 4, or even 5 of them, it seems to me that spills would be almost guaranteed in any non-trivial function, defeating the goal of avoiding memory operations. Since the return address is only read once, at the end of a function's execution, it seems like a logical candidate to spill.
On another note... So far, we've been discussing a caller-save convention with ~2 argument registers for function calls. Perhaps we ought to consider a different model, where we pass the return address as a hidden argument in a register, along with as many call arguments in registers as we can manage, and let the callee spill the return address and arguments if necessary. The return address probably shouldn't live in a reserved, never-spilled register, and if we can get away with passing 4-6 arguments in registers (especially on x86-64), that could be effective.
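To make that concrete, here is a minimal NASM-style x86-64 sketch of such a convention. The choice of rbx as the return-address register, rdi as an argument register, and the helper other_fn are assumptions for illustration only, not a proposed ABI; the point is that a leaf callee never touches memory, and a non-leaf callee spills the return address only because it needs the register for its own call:

;; Hypothetical convention: return address passed in rbx, first argument in rdi.

;; Leaf callee: works entirely in registers, "returns" by jumping through rbx.
leaf_fn:
    lea  rax, [rdi + 1]      ; do some work on the register argument
    jmp  rbx                 ; return to the caller

;; Non-leaf callee: spills the caller's return address because it needs
;; rbx to pass its own return point to other_fn (a placeholder callee).
nonleaf_fn:
    sub  rsp, 8
    mov  [rsp], rbx          ; spill the caller's return address
    lea  rbx, [rel back]     ; our own return point, in the same register
    jmp  other_fn
back:
    mov  rbx, [rsp]          ; reload the caller's return address
    add  rsp, 8
    jmp  rbx                 ; return to the caller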
- Maxime
On 2010-09-06, at 9:08 PM, chevalma@iro.umontreal.ca wrote:
My first suggestion is to put your notes on Google docs. Your notes seem pretty good so far.
Some notes:
Since the heap pointer is common to all code executing in a given thread, it makes some sense to keep it in a register, as opposed to duplicating it in multiple context objects. On the other hand, on x86-32 we have very few registers already, so it might make sense to keep it in some kind of global or thread-specific context. If we optimize code well enough, we could significantly decrease the number of allocations. So, perhaps a good initial option would be to keep it in a register because it's easy, and later on, do some profiling to see how many allocations we do and think about moving it to a global variable.
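For concreteness, here is a rough NASM-style x86-64 sketch of what the difference looks like for a 16-byte bump allocation. The use of r15 as the reserved heap pointer, rcx as the context pointer, and the offset 0 for the heap-pointer field are made up for illustration (limit/GC checks omitted):

;; Heap pointer reserved in a register (here r15, by assumption):
    mov  rax, r15            ; result = current heap pointer
    add  r15, 16             ; bump by the object size -- no memory traffic

;; Heap pointer kept in a context object pointed to by rcx
;; (hypothetical layout: heap pointer stored at offset 0):
    mov  rax, [rcx]          ; load the heap pointer from the context
    lea  rdx, [rax + 16]
    mov  [rcx], rdx          ; store the bumped pointer back -- one load, one store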
The context object will probably store quite a few objects from the low-level JavaScript library hierarchy. It might also contain performance counters.
What's not super clear to me is how context switching will be achieved. Tachyon compiler code needs its own global object, as do program instances, but some low-level library code should execute in the context of (i.e., using the global object of) the calling code.
I would not differentiate between implicit and explicit call arguments. They are essentially the same thing at a low level. Our optimization system will take care of specializing functions and eliminating unnecessary call arguments when possible.
The return address should probably be pushed on the stack. Ideally, short-running functions should be aggressively inlined, and the long-running functions that are left uninlined will only need to read the return address once.
I would keep the call protocol simple for now, so having everything caller-save makes sense.
- Maxime
The point of Erick's parameterization is to allow us to evaluate various sensible assignments of resources to see how well they perform (speed, space) on a number of benchmarks. So the goal at this point is not to choose the best assignment, but instead to make sure we cover all the reasonable assignments.
Concerning the passing of the return address, the two reasonable options are in a register and on the stack. We should explore both, but let me quickly compare them to show that it is not clear which one is best.
Passing the return address on the stack can be done with the x86 "call" instruction, which implicitly pushes the return address on the stack. This is compact, and we can expect the processor to be designed to handle it efficiently. On the other hand, it does involve memory operations (one push and one pop), and in general it is good to avoid memory operations as much as possible. Moreover, the call instruction combines two operations ("push return address" and "jump to function entry"), and sometimes it is useful to do these operations separately: for tail-call optimization, and when the reordering of the basic blocks puts a recursive call immediately before the entry point of the function, such as:
;; With a "call" instruction entry: if condition jump recursive_case A ret recursive_case: B call entry C ret
;; With a "jump" instruction and reordering of basic blocks recursive_case: B move ret_reg, stack_slot move $ret_point, ret_reg entry: if condition jump recursive_case A jump ret_reg .data descriptor ret_point: C jump stack_slot
In the second case, there is one fewer branch instruction in the recursive case. Branch instructions are usually expensive (branch prediction can alleviate this cost, but there's nothing like actually running the code to know how well the processor handles it).
Note also that using a call instruction makes it harder to efficiently assign a descriptor to the return point (for garbage collection). It is not efficient to put the descriptor after the call instruction because it has to be "skipped", either with an extra jump instruction or by computing the proper return address in the called function. When using jump instructions, the descriptor can be put just before the return point.
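As a rough NASM-style illustration of the two layouts (the dq word stands in for whatever descriptor format we end up with, and f is a placeholder callee):

;; With "call": the return address points just past the call instruction, so a
;; descriptor stored at the call site must either be stepped over by an extra
;; jump on the return path (as here) or the callee must adjust the return
;; address it was given.
    call f
    jmp  past_desc           ; executed on every return, only to skip the data
    dq   0                   ; placeholder GC descriptor for this return point
past_desc:
    ;; ... continuation ...

;; With an explicit jump: the descriptor sits just before the return point, so
;; the GC can find it at [return_address - 8] and the return path pays nothing.
    lea  rbx, [rel ret_point]
    jmp  f                   ; f eventually returns with "jmp rbx"
    dq   0                   ; placeholder GC descriptor
ret_point:
    ;; ... continuation ...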
Another point is that using the stack may make the code easier to interface with other languages (C) which use the stack for the return address. This is not a very strong argument: to interface seamlessly with other languages, the parameters must also be passed in the same locations (where on the stack? which registers?), the execution context must be the same (frame pointer, no context register), and the data representation must be the same.
Marc