[gambit-list] parallel GC

Adam adam.mlmb at gmail.com
Thu Nov 10 21:31:03 EST 2016


Hi Marc,

Interesting! Thanks for taking the time to describe this.

Here are some more questions; only the first three have practical
importance, though.


(Aha, so below, by "processor" you mean a CPU core.)


How well does the parallel heap and GC model fit different memory coherency
models, e.g. that of AMD64 (strong) and ARM (weak)?

(I guess those are the two extremes, and other architectures like IBM
POWER, SPARC, MIPS, you name it, land somewhere between them.)


What is the execution model for Gambit threads? Is the default mode that
execution is spread across all GVM processors?

If I run code in Gambit that blocks, e.g. blocking system calls such as the
DNS lookup in open-tcp-client, or blocking C code, can I devote a given
number of GVM processors to that?

Would there be some convenient way for me to run blocking C code and a GVM
processor in the same OS thread, so that when I go into the C world I make
some kind of "stamp out", so that a GC would not block waiting for that OS
thread to return to the Scheme world? How expensive would such a "stamp
out" be?
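
(To illustrate what I mean, a rough sketch of such a "stamp out" around a
blocking call; rt_leave_scheme/rt_enter_scheme are names I just made up,
not any actual Gambit API:)

#include <netdb.h>

/* Hypothetical hooks, invented for illustration only. */
void rt_leave_scheme(void);  /* "stamp out": GC may proceed without this thread */
void rt_enter_scheme(void);  /* "stamp in": wait here if a GC is in progress */

struct hostent *blocking_lookup(const char *name) {
  struct hostent *h;
  rt_leave_scheme();         /* promise not to touch the Scheme heap */
  h = gethostbyname(name);   /* blocking system call, pure C world */
  rt_enter_scheme();         /* rejoin; may block until a running GC ends */
  return h;
}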


2016-11-10 21:00 GMT+08:00 Marc Feeley <feeley at iro.umontreal.ca>:

>
> > On Nov 9, 2016, at 10:28 PM, Adam <adam.mlmb at gmail.com> wrote:
> >
> > From now on, "processor" will mean "an OS thread that's running a
> > GVM", right?
>
> Sort of…  A GVM (Gambit virtual machine) is actually a set of “processors”
> running a Gambit program.  Typically a GVM “processor” is mapped to an OS
> thread.  This choice of vocabulary is to abstract the implementation
> details and impress the idea that conceptually the VM is running on a set
> of processors, in parallel.  In an “on the bare metal” implementation these
> “processors” would be actual hardware processors.  But when running on top
> of a traditional OS, where it is not possible to access hardware processors
> directly, each “processor” is implemented with an OS thread, and it is
> expected that the OS will be intelligent enough to assign different
> hardware processors to all these OS threads (with POSIX threads and on
> Windows, thread affinity is used to help the OS achieve a one-to-one
> mapping).
>
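
(Side note: with POSIX threads on Linux, that affinity hint can be set
roughly as below; this is just a sketch of the mechanism, not Gambit's
actual code:)

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

/* Pin one OS thread to one CPU core (Linux-specific GNU extension). */
static int pin_to_cpu(pthread_t thread, int cpu) {
  cpu_set_t set;
  CPU_ZERO(&set);       /* start with an empty CPU set */
  CPU_SET(cpu, &set);   /* allow only this one core */
  return pthread_setaffinity_np(thread, sizeof(set), &set);
}
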
> > 2016-11-08 12:31 GMT+08:00 Marc Feeley <feeley at iro.umontreal.ca>:
> > ..
> > The barrier synchronizations are implemented using a binary-tree-like
> > structure and time-limited spin-barriers to synchronize a parent
> > processor with its 2 children.  So a barrier takes logarithmic time.
> >
> > Cool!
> >
> > Just out of curiosity for nitty-gritty details:
> >
> > Which processor is the ultimate parent? Is this decided at processor
> > initialization, or at each GC? Is it the processor that triggers the GC?
>
> When the Gambit process starts, the current thread is considered
> “processor 0”.  After the Scheme library is initialized, the VM is resized
> to N processors (where N is supplied by the -:pN runtime option).  So
> processor 0 is initially running the primordial Scheme thread.  Note
> however that threads can migrate to another processor to balance the load.
>
> All processors run the Scheme code in parallel and each processor has a
> heap section in which it does its memory allocations independently from the
> other processors (so allocation requires no locking, except when a heap
> section is full and a new heap section needs to be obtained from the pool
> of free heap sections, but that is relatively infrequent and there is very
> low contention for the lock). When the pool of free heap sections is
> exhausted, the processor that is doing the allocation will trigger a GC (so
> any processor can trigger a GC, and it is possible that more than one
> processor simultaneously triggers a GC).
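
(If I picture this right, the allocation fast/slow paths would be roughly
like the sketch below; all names are mine, invented for illustration:)

#include <pthread.h>
#include <stddef.h>

/* Each processor bump-allocates within a private heap section, lock-free;
   the lock is only taken to fetch a fresh section from the shared pool. */
typedef struct { char *alloc_ptr; char *limit; } heap_section;

static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;
heap_section *pool_get_free_section(void);  /* returns NULL when exhausted */

void *allocate(heap_section **sec, size_t bytes) {
  for (;;) {
    char *p = (*sec)->alloc_ptr;
    if (bytes <= (size_t)((*sec)->limit - p)) {  /* common, uncontended path */
      (*sec)->alloc_ptr = p + bytes;
      return p;
    }
    pthread_mutex_lock(&pool_lock);              /* rare path: section full */
    heap_section *fresh = pool_get_free_section();
    pthread_mutex_unlock(&pool_lock);
    if (fresh == NULL)
      return NULL;                               /* pool exhausted: trigger a GC */
    *sec = fresh;
  }
}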


Just curious, where are the malloc() calls done (to increase the total heap
space available for live objects)?

Also, is there any point in changing the memory block size from the
previous 512 bytes to the page size, e.g. 4096 bytes, so as to minimize the
possibility that two processors write to memory addresses within the same
page, hence congesting the memory coherence logic on AMD64?

(I.e. the performance difference on AMD64 between core 1 and core 2 doing
concurrent writes to the same memory page versus to different pages is
enormous. If I recall right, the paper "What Every Programmer Should Know
About Memory" by Ulrich Drepper,
https://www.akkadia.org/drepper/cpumemory.pdf, showed measurements with
something like 10000x performance differences.)

I'm not sure whether malloc() allocations tend to be aligned to page
boundaries; anyhow, I guess that would be a healthy assumption.
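
(For what it's worth, malloc() only guarantees alignment suitable for any
built-in type, typically 8 or 16 bytes, so page alignment would have to be
requested explicitly; a sketch:)

#include <stdlib.h>

/* Request a page-aligned block; 4096 is an assumed page size. */
void *alloc_page_aligned(size_t bytes) {
  void *p = NULL;
  if (posix_memalign(&p, 4096, bytes) != 0)
    return NULL;
  return p;
}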


> At that point all processors are interrupted (by raising a flag that is
> polled regularly) to execute the GC in parallel.  This is done using a
> barrier synchronization (so that the GC starts only after all processors
> have transitioned from the execution of the main program to the execution
> of the GC, to avoid having some processors start the GC while others are
> still allocating or modifying objects).  Then, within the GC, barrier
> synchronizations are also performed to separate each phase of the GC
> (initialization, assignment of stack and heap sections, marking using
> strong references, …).
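
(So, if I read this right, the polling side would look something like the
sketch below; the names are invented by me:)

#include <stdatomic.h>

void enter_gc_barrier(void);      /* invented name: join the GC rendezvous */

static atomic_bool gc_requested;  /* raised by the processor triggering GC */

void poll_interrupts(void) {      /* called at regular safe points */
  if (atomic_load_explicit(&gc_requested, memory_order_acquire))
    enter_gc_barrier();
}
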
>
> > What's the motivation for a tree-like propagation at all, compared for
> > instance with having the processor that triggers the GC do the sync with
> > all other processors, all by itself?
>
> That would take linear time. Logarithmic is faster.


Wait. Doing a loop from 0 to N cores (where N is generally below 100 or
1000 anyhow) to set a memory address would take negligible time on all
architectures:

for (int i = 0; i < processors; i++) { processor[i]->going_into_gc = true; }
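
(Whereas the tree-shaped arrival, as I understand it, would be something
like the sketch below; the real structure surely differs:)

#include <stdatomic.h>

#define NPROC 8  /* assumed processor count, for illustration */

/* Arrival phase of a binary-tree barrier: each node waits for its two
   children before reporting itself arrived, so the waiting depth is
   O(log N); a release phase would then propagate back down the tree. */
static atomic_int arrived[NPROC];

void barrier_arrive(int id) {
  int left = 2 * id + 1, right = 2 * id + 2;
  if (left < NPROC)
    while (!atomic_load(&arrived[left])) ;   /* spin (time-limited in Gambit) */
  if (right < NPROC)
    while (!atomic_load(&arrived[right])) ;
  atomic_store(&arrived[id], 1);             /* id 0 arriving means all arrived */
}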

Is this propagation used only for signalling that you're going into a
synchronous operation/GC, or also for more complex operations, like
distributing the workload within the marking phase?


> Also, it is necessary to have a synchronization mechanism that will
> tolerate simultaneous triggering of the GC and only do one GC regardless of
> how many processors triggered a GC. So using a predefined barrier
> synchronization primitive is not sufficient. The mechanism implemented in
> the runtime system allows processors to request a synchronous service (such
> as garbage collection, or resizing the VM) and the mechanism will sort out
> which service “wins” (in the case where there are conflicting services
> requested).
>

Is the "resizing the VM" about changing total heap size, or changing the
number of processors, or either?

Are more synchronous services coming up?

How do you ensure that the GC is triggered exactly once? Say the GC
triggering logic in one processor is "if (gc found to be needed) { workup;
go into gc; }"; if that triggers in more processors at exactly the same
time, then, just very approximately, how do you make it go into GC exactly
once?
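
(My naive guess would be a compare-and-swap election, roughly as in the
sketch below; this is an assumption about the general technique, not
necessarily what Gambit actually does:)

#include <stdatomic.h>

void request_gc_service(void);  /* invented: ask for the synchronous GC service */
void join_gc_barrier(void);     /* invented: rendezvous with the other processors */

static atomic_int gc_pending;   /* 0 = no GC requested, 1 = GC requested */

void maybe_trigger_gc(void) {
  int expected = 0;
  /* Exactly one of the simultaneous triggerers wins the CAS... */
  if (atomic_compare_exchange_strong(&gc_pending, &expected, 1))
    request_gc_service();
  /* ...but every processor joins the same single collection. */
  join_gc_barrier();
}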