James Long wrote:
(A) use the refcount in still objects. One Gambit engine is the master, the others are slaves and increase/decrease the refcount as they need. This requires either the ability to run multiple Gambit runtimes in the same process, or an extension to make Gambit allocate still objects from another heap.
You would have to lock the whole memory allocator per allocation, and every object per refcount increase/decrease. I'm not sure it would be safe for the allocator to be garbage collecting across the buffer in parallel either - though since it's just scanning for memory whose refcount is 0, it might be. Still, it seems like you would get a bad case of lock contention.
Hm, what I meant was: have one master runtime which allocates the object, and have the other runtimes increment its refcount to prevent the master from releasing it for as long as they like.
Lock contention (or other synchronization overhead, with atomic asm ops) is always a problem if you have to inc/dec a shared refcount. But if you want to write to the objects across runtimes, you need some sort of locking/synchronization anyway.
You see, that's why I'm a fan of my functional database idea--when you don't mutate objects, you won't have to tell any CPU that they changed. But for some things mutation is efficient; I'm not sure if that's the case for the algorithms you have in mind. (PS. As long as you haven't advertised an object, you can mutate it. So algorithms which e.g. fill a vector and only then advertise it to the other processes/threads will not need locking. "Advertising" could mean sending the object id over a Termite channel. BTW, each object could contain a read-only flag in its representation, so that you get an error (in safe mode) should you try to modify it after it has been offered to others.)
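A rough C-level sketch of the read-only-flag idea; the struct and function names here (shared_vec, shared_vec_publish, ...) are made up for illustration, and in practice the check would of course sit inside Gambit's safe-mode accessors rather than in user code:

#include <assert.h>
#include <stddef.h>

typedef struct {
    int    published;   /* set once the object has been advertised */
    size_t len;
    double data[];      /* the payload */
} shared_vec;

/* Fill freely while the object is still private to its creator ... */
static void shared_vec_set(shared_vec *v, size_t i, double x) {
    assert(!v->published);   /* safe-mode error: already offered to others */
    v->data[i] = x;
}

/* ... and flip the flag at the moment its id is sent over a channel. */
static void shared_vec_publish(shared_vec *v) {
    v->published = 1;
}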
(B) use wrapper objects around shared storage outside of the Gambit heap. E.g. you would use plain C arrays/data structures in shared memory, each carrying a refcount, and the finalizers in the normal Gambit FFI objects around them can decrement the refcount and free the C data structure when it drops to zero.
This could work, but the refcount would only be for each runtime.
Er, no, the refcount is shared; it contains the number of parties still interested in the object. It is a means to avoid a global garbage collector.
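A minimal sketch of what such a shared refcount could look like in the header of a C object living outside the Gambit heaps; the names are hypothetical, plain malloc stands in for whatever shared-memory allocator is actually used, and the __sync calls are the gcc atomic builtins mentioned further down:

#include <stdlib.h>

typedef struct {
    volatile int refcount;   /* number of parties still interested */
    size_t       size;       /* payload size in bytes */
    char         data[];     /* the shared payload itself */
} shared_obj;

/* The creator starts with one reference. */
shared_obj *shared_obj_alloc(size_t size) {
    shared_obj *o = malloc(sizeof(shared_obj) + size);
    if (o) { o->refcount = 1; o->size = size; }
    return o;
}

/* Another runtime declaring its interest in the object. */
void shared_obj_retain(shared_obj *o) {
    __sync_add_and_fetch(&o->refcount, 1);
}

/* Called e.g. from the finalizer of a Gambit FFI wrapper: whoever
 * drops the count to zero frees the C data structure. */
void shared_obj_release(shared_obj *o) {
    if (__sync_sub_and_fetch(&o->refcount, 1) == 0)
        free(o);
}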
Also, I'm assuming you'd have to use a separate memory allocator to manage this external heap. There are probably thread-safe memory allocators out there.
Yes, as I've mentioned below.
As for garbage collection, I think you'd have to manually free these objects.
No, not if you use those shared refcounts and FFI wrappers. (Except that if a Gambit runtime crashes, the refcount will not be decremented and the object will thus never be freed, but maybe one could live with that.)
(Of course, if you have really big objects, freeing them manually could be an advantage because the memory can be reused quicker.)
(...)
That's certainly where it starts getting tricky. And it's where I feel like I want the system that deals with shared memory to be as explicit and restrictive as possible. It should almost be discouraged because of the complexities of it. Only the people who want to get dirty can allocate these special objects, manipulate their data, and deallocate them.
Well, it sounds like just using the FFI could get you far, then. (I did some of that stuff already, and the posix part of it is to be released with my pending modules release.)
Regarding freeing memory: the easy way is to use e.g. Linux tmpfs and a separate file for each object. This should work well enough when only big objects are used. If it should also work for small objects, one of the free memory-allocator implementations could probably be used on top of a single file. (For many very small objects and purely functional data structures, a mostly-lockfree copying GC, as mentioned in the Mnesia thread, would be better.)
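A minimal sketch of the one-file-per-object approach, using POSIX shm_open, which on Linux places the file in tmpfs under /dev/shm; the function names are invented, error handling is reduced to returning NULL, and older glibc needs -lrt at link time:

#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Create a named shared object of the given size and map it. */
void *make_shared_buffer(const char *name, size_t size) {
    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);   /* file in tmpfs */
    if (fd < 0) return NULL;
    if (ftruncate(fd, size) < 0) { close(fd); return NULL; }
    void *p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);                        /* the mapping keeps the memory alive */
    return p == MAP_FAILED ? NULL : p;
}

/* Another runtime attaches by name ... */
void *attach_shared_buffer(const char *name, size_t size) {
    int fd = shm_open(name, O_RDWR, 0600);
    if (fd < 0) return NULL;
    void *p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);
    return p == MAP_FAILED ? NULL : p;
}

/* ... and whoever drops the shared refcount to zero unmaps and unlinks;
 * the kernel releases the memory once all mappings are gone. */
void free_shared_buffer(const char *name, void *p, size_t size) {
    munmap(p, size);
    shm_unlink(name);
}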
I'll have to look at the Mnesia thread and mostly-lockfree copying GC. I'm still unclear how you could have a thread-safe GC, even with refcounts.
With refcounts, you just need atomic increment/decrement operations (either using explicit locks or, better, something like the atomic builtins which are now integrated into gcc (I haven't tried those yet)). Once the refcount drops to zero, the thread/process that notices this releases the object. Making the heap management itself thread-safe will be more involved (if you're not just using tmpfs files), but as I said, I think there are ready-made free implementations already. Or you could delegate allocation and freeing to a dedicated thread/process by sending it messages (this could just be sending 4 or 8 bytes of the address or length over a (unix domain) socket).
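A rough sketch of that delegation variant, here with a dedicated allocator thread fed over a socketpair; the names are invented, only freeing is shown (a real version would also carry allocation requests and replies), and datagram sockets are used so that each 8-byte address stays a single message:

#include <pthread.h>
#include <stdint.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <unistd.h>

static int alloc_fd[2];   /* [0]: allocator thread reads, [1]: clients write */

/* The dedicated thread: read one address at a time and free it; since
 * this is the only place the heap is touched, no lock is needed. */
static void *allocator_loop(void *arg) {
    uintptr_t addr;
    (void)arg;
    while (read(alloc_fd[0], &addr, sizeof addr) == (ssize_t)sizeof addr)
        free((void *)addr);
    return NULL;
}

void start_allocator(void) {
    pthread_t tid;
    socketpair(AF_UNIX, SOCK_DGRAM, 0, alloc_fd);
    pthread_create(&tid, NULL, allocator_loop, NULL);
}

/* Called e.g. from an FFI wrapper's finalizer once the shared refcount
 * has dropped to zero: just ship the pointer to the allocator thread. */
void delegate_free(void *p) {
    uintptr_t addr = (uintptr_t)p;
    write(alloc_fd[1], &addr, sizeof addr);
}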
Christian.