> On Nov 9, 2016, at 10:28 PM, Adam <adam.mlmb@gmail.com> wrote:
>
> From now on, "processor" will mean "an OS thread that's running a GVM", right?
Sort of… A GVM (Gambit virtual machine) is actually a set of “processors” running a Gambit program. Typically a GVM “processor” is mapped to an OS thread. This choice of vocabulary abstracts the implementation details and conveys the idea that conceptually the VM is running on a set of processors, in parallel. In an “on the bare metal” implementation these “processors” would be actual hardware processors. But when running on top of a traditional OS, where it is not possible to access hardware processors directly, each “processor” is implemented with an OS thread, and it is expected that the OS will be intelligent enough to assign different hardware processors to all these OS threads (with POSIX threads and on Windows, thread affinity is used to help the OS achieve a one-to-one mapping).
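To illustrate the mapping, here is a minimal C sketch (not Gambit's actual code) assuming Linux's pthread_setaffinity_np: it starts one OS thread per “processor” and pins each to a distinct hardware CPU as an affinity hint.

  #define _GNU_SOURCE
  #include <pthread.h>
  #include <sched.h>
  #include <stdio.h>

  #define NB_PROCESSORS 4              /* hypothetical VM size */

  /* Body of one VM "processor"; a real one would run the VM's main loop. */
  static void *processor_main(void *arg) {
    long id = (long)arg;
    printf("processor %ld running\n", id);
    return NULL;
  }

  int main(void) {
    pthread_t threads[NB_PROCESSORS];
    for (long id = 0; id < NB_PROCESSORS; id++) {
      pthread_create(&threads[id], NULL, processor_main, (void *)id);
      /* Affinity hint: bind this "processor" to one hardware CPU so the
         OS can achieve a one-to-one mapping. */
      cpu_set_t cpus;
      CPU_ZERO(&cpus);
      CPU_SET((int)id, &cpus);
      pthread_setaffinity_np(threads[id], sizeof(cpus), &cpus);
    }
    for (int i = 0; i < NB_PROCESSORS; i++)
      pthread_join(threads[i], NULL);
    return 0;
  }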
> 2016-11-08 12:31 GMT+08:00 Marc Feeley <feeley@iro.umontreal.ca>:
> ..
> The barrier synchronizations are implemented using a binary-tree-like structure and time-limited spin barriers to synchronize a parent processor with its 2 children. So a barrier takes logarithmic time.
>
> Cool!
>
> Just out of curiosity for nitty-gritty details:
>
> Which processor is the ultimate parent? Is this determined at processor initialization, or at each GC? Is it the processor that triggers the GC?
When the Gambit process starts, the current thread is considered “processor 0”. After the Scheme library is initialized, the VM is resized to N processors (where N is supplied by the -:pN runtime option). So processor 0 is initially running the primordial Scheme thread. Note however that threads can migrate to another processor to balance the load.
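For example, running with the runtime option -:p4 (e.g. “gsi -:p4 app.scm”, where the program name is hypothetical) starts the VM with 4 processors; processor 0 runs the primordial thread, and threads created by the program may later be migrated to processors 1 to 3 to balance the load.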
All processors run the Scheme code in parallel, and each processor has a heap section in which it does its memory allocations independently from the other processors. Allocation therefore requires no locking, except when a heap section is full and a new heap section needs to be obtained from the pool of free heap sections; that is relatively infrequent and there is very low contention for the lock. When the pool of free heap sections is exhausted, the processor that is doing the allocation will trigger a GC. So any processor can trigger a GC, and it is possible that more than one processor triggers a GC simultaneously.
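A rough C sketch of this allocation scheme (mine, not the actual runtime code; the section size and structure layout are made up): only refilling from the shared pool takes the lock.

  #include <pthread.h>
  #include <stdlib.h>
  #include <stddef.h>

  #define SECTION_SIZE (1 << 16)   /* hypothetical heap section size (bytes) */

  typedef struct section { struct section *next; } section_t;

  /* Shared pool of free heap sections, protected by a single lock. */
  static section_t *free_pool = NULL;
  static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;

  /* Per-processor allocation state: a bump pointer in the current section. */
  typedef struct {
    char *alloc_ptr;
    char *alloc_limit;
  } processor_t;

  static void trigger_gc(processor_t *p) {
    /* In the real system this interrupts all processors and they run the
       collector in parallel, refilling free_pool; here it is a placeholder. */
    (void)p;
    abort();
  }

  /* Fast path: bump allocation, no locking at all. */
  void *allocate(processor_t *p, size_t nbytes) {
    for (;;) {
      if (p->alloc_ptr + nbytes <= p->alloc_limit) {
        void *obj = p->alloc_ptr;
        p->alloc_ptr += nbytes;
        return obj;                    /* common case: no lock taken */
      }
      /* Slow path: get a fresh section from the shared pool. This is
         infrequent, so contention on pool_lock stays very low. */
      pthread_mutex_lock(&pool_lock);
      section_t *s = free_pool;
      if (s != NULL) free_pool = s->next;
      pthread_mutex_unlock(&pool_lock);
      if (s == NULL) {
        trigger_gc(p);                 /* pool exhausted: trigger a GC */
        continue;                      /* retry once sections are free */
      }
      p->alloc_ptr = (char *)s + sizeof(section_t);
      p->alloc_limit = (char *)s + SECTION_SIZE;
    }
  }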
At that point all processors are interrupted (by raising a flag that is polled regularly) to execute the GC in parallel. This is done using a barrier synchronization, so that the GC starts only after all processors have transitioned from the execution of the main program to the execution of the GC; otherwise some processors could start the GC while others are still allocating or modifying objects. Then, within the GC, barrier synchronizations are also performed to separate each phase of the GC (initialization, assignment of stack and heap sections, marking using strong references, …).
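As a sketch of the interrupt mechanism (assumptions mine, using C11 atomics and placeholder barrier/GC functions): each processor polls its flag at safe points and, on seeing it set, joins a barrier before collecting.

  #include <stdatomic.h>
  #include <stdbool.h>

  #define NB_PROCESSORS 4              /* hypothetical VM size */

  /* One flag per processor, raised by whichever processor triggers the GC. */
  static atomic_bool interrupt_flag[NB_PROCESSORS];

  void request_gc(void) {
    for (int i = 0; i < NB_PROCESSORS; i++)
      atomic_store(&interrupt_flag[i], true);
  }

  static void sync_with_all_processors(int self) {
    (void)self;  /* placeholder: see the tree barrier sketch further down */
  }

  static void run_gc_phases(int self) {
    (void)self;  /* each phase would be separated by further barriers */
  }

  /* Called regularly from generated code at safe points. */
  void poll_interrupts(int self) {
    if (atomic_exchange(&interrupt_flag[self], false)) {
      sync_with_all_processors(self);  /* GC starts only when ALL arrive */
      run_gc_phases(self);
    }
  }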
> What's the motivation for a tree-like propagation at all, compared for instance with having the processor that triggers the GC do the synchronization with all the other processors by itself?
That would take linear time. Logarithmic is faster.
Also, it is necessary to have a synchronization mechanism that tolerates simultaneous triggering of the GC and does only one GC regardless of how many processors triggered it. So using a predefined barrier synchronization primitive is not sufficient. The mechanism implemented in the runtime system allows processors to request a synchronous service (such as garbage collection, or resizing the VM) and it sorts out which service “wins” in the case where conflicting services are requested.
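To make the shape of this concrete, here is a toy C sketch (my reconstruction, not the actual runtime code): each processor spins waiting for its two children, folds the requested service codes upward so that concurrent requests collapse into a single winner, and the root's decision is broadcast back down. The tree's depth gives the logarithmic cost. Note that the real implementation uses time-limited spins, whereas this sketch spins unconditionally, and the service names are made up for illustration.

  #include <stdatomic.h>

  #define NB_PROCESSORS 8              /* hypothetical VM size */

  /* Service codes; the highest-numbered request "wins" on conflict. */
  enum { SERVICE_NONE = 0, SERVICE_GC = 1, SERVICE_RESIZE_VM = 2 };

  static atomic_int up[NB_PROCESSORS];    /* child -> parent: requested service */
  static atomic_int down[NB_PROCESSORS];  /* parent -> child: winning service   */

  /* Reset to -1 ("no message yet") before each barrier; a real protocol
     would fold this reset into the barrier itself. */
  void barrier_reset(void) {
    for (int i = 0; i < NB_PROCESSORS; i++) {
      atomic_store(&up[i], -1);
      atomic_store(&down[i], -1);
    }
  }

  static int max_int(int a, int b) { return a > b ? a : b; }

  /* Processor `self` enters the barrier requesting `service`.
     Returns the service that won across all processors. */
  int barrier_sync(int self, int service) {
    int left = 2 * self + 1, right = 2 * self + 2;
    int s;

    /* Wait for the (at most 2) children and fold their requests into ours.
       The tree has logarithmic depth, so the barrier is O(log N). */
    if (left < NB_PROCESSORS) {
      while ((s = atomic_load(&up[left])) < 0) ;   /* spin */
      service = max_int(service, s);
    }
    if (right < NB_PROCESSORS) {
      while ((s = atomic_load(&up[right])) < 0) ;  /* spin */
      service = max_int(service, s);
    }

    if (self != 0) {
      atomic_store(&up[self], service);            /* report to the parent */
      while ((service = atomic_load(&down[self])) < 0) ;  /* await decision */
    }
    /* At the root, every processor has arrived and `service` is the winner. */

    /* Broadcast the winning service back down the tree. */
    if (left < NB_PROCESSORS) atomic_store(&down[left], service);
    if (right < NB_PROCESSORS) atomic_store(&down[right], service);

    return service;
  }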