My vote:
Yes, I vote for low-level also.
Gambit cannot abstract away the way the hardware does memory alignment and ordering, so your question ("Should Gambit .. abstract[..], or .. programmer has to deal with such issues") is a bit ambiguous.
In order to provide a competent vote here, I read through the documentation of all of the mainstream CPU architectures (AMD64, SPARC, ARM, MIPS, IBM Power), and what I found is that there is indeed a universal way, across all these architectures, to do data accesses inexpensively and atomically, both locally on a CPU core and between CPU cores: make all loads and stores naturally aligned. That is, byte accesses can be made at any address, word (16-bit) accesses at addresses that are a multiple of 2, dword (32-bit) accesses at 4-byte multiples, and qword (64-bit) accesses at 8-byte multiples [1].
As long as that convention is followed, a value will always be loaded atomically (on any receiving core, following a store made on any core), in the sense that there will be no data destruction where your load retrieves a half-updated value.
The last consideration then is memory ordering, and here the strongly ordered architectures (AMD64 and SPARC) require no additional considerations, while the weakly ordered architectures (ARM, MIPS, IBM Power) need a barrier operation to force uncommitted stores to be flushed.
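To illustrate the hazard, here is a hypothetical two-core sketch (using the |force-order!| barrier primitive proposed below): without a barrier between core A's two stores, a weakly ordered CPU may make them visible to core B in the opposite order.

(define data (vector 0))
(define ready (vector #f))

;; Core A:
(vector-set! data 0 42)
;; on ARM/MIPS/IBM Power a (force-order!) is needed here;
;; without it the two stores may become visible in reverse order
(vector-set! ready 0 #t)

;; Core B:
(if (vector-ref ready 0)
    (vector-ref data 0) ; without the barrier, may still read 0
    'not-ready)

On AMD64 and SPARC the stores are guaranteed to become visible in program order, so the barrier costs nothing there.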
Therefore, an overall strategy for all of Gambit to function coherently in multicore (SMP) use is:
- Gambit locates all variable and other value slots on the heap at aligned memory addresses only.
This way, the passing of object references and unallocated (immediate) objects between CPU cores will always happen gracefully, as all Gambit accesses of such data will automatically be atomic (as in corruption-proof) between CPU cores.
- Gambit provides a memory barrier primitive for weakly ordered CPU architectures.
It could be called |force-order!|; it is a NOOP on strongly ordered CPU architectures. (A sketch of a possible implementation follows below.)
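As a minimal sketch of how such a primitive could be implemented through Gambit's C interface, assuming a C11 compiler (|force-order!| is the name proposed above, not an existing Gambit procedure):

(c-declare "#include <stdatomic.h>")

;; A store barrier: compiles to a real fence instruction on weakly
;; ordered CPUs (ARM, MIPS, IBM Power) and to only a compiler barrier
;; (no instruction) on strongly ordered ones (AMD64, SPARC).
(define force-order!
  (c-lambda () void "atomic_thread_fence(memory_order_release);"))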
Next, with respect to Gambit's design (the way a user relates to Gambit and interfaces with it), I presume it will work like this:
- Gambit internal level:
Gambit internally is self-contained and does not need any particular intervention from the user to work on a given CPU architecture; that is, Gambit internally issues memory barriers as needed for atomicity and ordering matters to work out.
- Gambit runtime/user interface level:
The parts of Gambit's exports that relate to multiprocessing, so that would be message passing, IO, threading and locking (e.g. read/write-u8vector/u8 etc., thread-send/thread-receive, mutex-lock!/mutex-unlock!, thread-start!/thread-terminate!), should be multicore-proof by default. (Higher-speed non-multicore versions may be available.)
E.g.,
(define m (make-mutex))
(mutex-lock! m)
(define t (thread-start!
           (make-thread
            (lambda ()
              (display (thread-receive))
              (mutex-unlock! m)))))
(thread-send t '(my struct))
is safe out of the box on any architecture: Gambit takes care of all atomicity and ordering, including memory barriers (on weakly ordered architectures).
Note here that thread-send must imply a memory barrier on weakly ordered architectures, for the case where the receiving thread runs on another CPU core.
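A hypothetical sketch of that guarantee (not Gambit's actual implementation) is that thread-send would behave as if it were:

;; Flush the message's contents before handing it to the receiver,
;; so a receiver on another core never sees a half-built message.
(define (multicore-safe-thread-send t msg)
  (force-order!)
  (thread-send t msg))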
- User level:
Obviously, for execution within the local CPU core, no particular atomicity or ordering considerations are needed.
For accesses that do or may span CPU cores, the user must sugar the code with |force-order!| calls at the points where values have been mutated or new values have been allocated, and another CPU core will access those values.
The "crash profile" we get is that attempting to access on core B an object that was newly allocated on core A, may crash on a weakly ordered architecture, if the code was not properly sugared with |force-order!|:s.
A partial remedy exists in the form that, for a structure that has already been flushed to core B, if a mutation made on core A has not yet been flushed to core B, then any retrieval on core B will get the old value rather than a broken value, so at least Gambit will not crash in such circumstances.
Example: if core A does (define v (vector #f)) (force-order!) ... (vector-set! v 0 #t), then any access from core B to the slot in |v| after the first |force-order!| will be crash-proof for unallocated values; that is, (vector-ref v 0) on core B will always retrieve a valid value (presuming the stored value is unallocated, e.g. a fixnum, boolean, character, or a previously allocated symbol), the only risk being that you might get an older value.
No such risk of crashes applies on strongly ordered architectures.
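Putting the user-level discipline together, here is a minimal sketch (again assuming the proposed |force-order!|) of core A publishing a freshly allocated structure to core B:

;; Core A: allocate and initialize, flush, then publish the reference.
(define shared (vector #f)) ; slot that core B will poll
(define obj (list 1 2 3))   ; freshly allocated structure
(force-order!)              ; flush obj's contents before sharing the reference
(vector-set! shared 0 obj)

;; Core B: retrieves either #f (the older value) or a fully flushed
;; obj, never a broken value.
(vector-ref shared 0)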
Please note that we use the atomicity guarantee that we get on all architectures for naturally aligned accesses as the foundation for the management of object references (as these are ordinary 64/32-bit values).
The atomicity guarantees may not necessarily apply to floating point values on weakly ordered architectures, so on those architectures any use of floating point values may need additional consideration, which would be subject to another, separate discussion here.
One thought that comes to mind here is: must the C compiler be given any particular instructions or type definitions to deliver for this SMP use case (as in, to honor the atomicity and ordering)?
Feedback?
Adam