Hi Marc,
Below are some questions about sequential consistency and critical sections in SMP Gambit.
The Sutter videos referenced below illustrate the problem domain beautifully.
* Please have a look at the points below and confirm whether this is what Gambit does.
* In thread-local code, does Gambit internally do any optimizations that alter the execution order? In C-compiled code I presume the C compiler can make such optimizations, which in itself requires Gambit to address this problem domain.
* The baseline constraint on Gambit's, as well as any C compiler's, optimizations is that optimizations which alter order will not span module-external calls, is that correct?
* Does Gambit offer any low-level primitives (critical sections) for doing low-level, in-module, sequentially consistent operations between cores? I.e., anything lower level than thread-send/receive and mutex-lock/unlock. What about a full fence, an acquire/read fence, a release/write fence? (A rough mutex-based emulation is sketched below, after the example.)
* Does Gambit guarantee that newly allocated objects have some initialization when accessed from any CPU core, or, if malloc() handed back garbage, may a Gambit thread see that garbage?
Example:
Context: (define x #f) (##fence!)
Core A: (set! x (vector 1 2))
Core B: (let loop () (let ((v x)) (if v (print (##vector-ref v 0))) (loop)))
What are all the possible values that may be printed?
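In case there are no dedicated fence primitives, here is the rough workaround I had in mind, as mentioned above: a sketch that assumes (unconfirmed) that mutex-lock!/mutex-unlock! imply a full memory barrier; fence-mutex and my-fence! are just names I made up, not Gambit primitives.

(define fence-mutex (make-mutex))

(define (my-fence!)
  ;; hypothetical helper: a lock/unlock pair on a shared mutex, which
  ;; would behave like a full fence if mutex operations imply a barrier
  (mutex-lock! fence-mutex)
  (mutex-unlock! fence-mutex))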
C and C++ standardized their SMP memory model in the C11/C++11 specs. It would be great to see Gambit specify the same now.
Thanks,
Adam
Re critical sections, do you have primitives in SMP Gambit to force the compiler to respect them, and then to emit, in the generated code, assembly instructions that honor them?
Today in SMP Gambit, I presume thread-send/receive and mutex-lock/unlock will abstract away the underlying architecture's memory model, and ensure that thread-receive will have access to the whole structure that thread-send sent.
Example:
GVM processor A:
(thread-send B (list 1 2))
(define l (list 3 4))
(set-car! l 5)
(thread-send B l)
GVM processor B:
(let loop () (for-each print (thread-receive)) (loop))
This is to illustrate that thread-send/receive ensures that the list elements will be accessible (= read correctly) at the receiving site. In this case, on any architecture, four print calls are done, each with the respective argument: 1, 2, 5, 4.
I guess that design feature is helped by the fact that the Gambit compiler will not perform optimizations that reorder code across module-external procedure calls, and thread-send/receive & mutex-lock/unlock count as such calls.
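To make that assumption concrete, here is the kind of reordering I presume is ruled out (a sketch; whether thread-send really acts as such a compiler and memory barrier is exactly what I am asking):

(define B (thread-start!
           (make-thread (lambda () (for-each print (thread-receive))))))
(define p (list 0 0))
(set-car! p 1)      ;; assumption: cannot be sunk below the thread-send call,
(thread-send B p)   ;; so B should never see the initial 0 in the car
(set-car! p 2)      ;; assumption: cannot be hoisted above the thread-send call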
I presume that on strongly ordered architectures the following will make 3 and then 2 go through:
GVM processor A:
(define l (list 1 2))
(thread-send B l)
(set-car! l 3)
GVM processor B:
(define m (thread-receive))
(let loop ((i 0)) (if (not (eqv? (expt 3 12) i)) (loop (+ i 1))))
(for-each print m)
On a weakly ordered architecture the code would not SIGSEGV nor throw an exception, but the first printed value could be 1 or 3.
Importantly, on a weakly ordered architecture the code would never cause a SIGSEGV, because Gambit's memory management ensures that pending memory bus transactions are flushed before it initiates collection of a dead object.
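As an aside, I presume B could be made to observe the 3 reliably by having A send a second message after the mutation, assuming thread-send acts as a release and thread-receive as an acquire (my assumption, a sketch, not confirmed behavior):

GVM processor A:
(define l (list 1 2))
(thread-send B l)
(set-car! l 3)
(thread-send B 'done)   ;; second message signals that the mutation is published
GVM processor B:
(define m (thread-receive))
(thread-receive)        ;; wait for the 'done signal before reading
(for-each print m)      ;; should now print 3 and then 2 on any architecture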
Similarly any allocated object's space is immediately accessible on all cores:
Common setup:
(define s #f)
GVM processor A:
(set! s (make-u8vector 5))
GVM processor B:
(let loop () (let ((v s)) (if v (##u8vector-ref v 1)) (loop)))
The promise is limited to the space being accessible, though: on a weakly ordered architecture the content may not have been initialized yet, and therefore the type and range checks in (u8vector-ref v 1) could fail; (##u8vector-ref v 1) does not do those checks, and while it may return junk, at least it will not crash.
The two behaviors described above are in symmetry with any host OS's malloc/free.
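For completeness, the pattern I would use to make the contents reliably visible is to initialize the u8vector before publishing it via thread-send, again assuming (unconfirmed) that thread-send/receive provide the needed ordering:

GVM processor A:
(let ((v (make-u8vector 5 0)))
  (u8vector-set! v 1 42)      ;; initialize before publishing
  (thread-send B v))
GVM processor B:
(let ((v (thread-receive)))
  (print (u8vector-ref v 1)))   ;; should print 42, with no risk of junk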
Important references:
Sutter videos: