Hi,
While our code base for Butterfly is currently proprietary, I thought I would share a few lessons learned from my own re-implementation of Termite. We have had Butterfly running successfully across our own local area network, spawning remote threads and performing "RPC" among 6 different computers, running for hours on end and passing several million messages between them with no problems, save for a few cautionary notes.
Our Butterfly images were written in domain-specific Lisp, with all compiler dependencies confined to a single 80-line source module. The compilers used were LispWorks for Mac and Windows, and Franz Allegro for Windows. We have a port to SBCL about 99% complete, but we got diverted from this activity before fully resolving our mapping to PThreads.
1. We found you do not need 3 full tasks to manage the socket proxy on each end of a network connection; a reader and a writer thread suffice, with the reader in charge of spawning the writer. Out-of-order message delivery is a must for keeping the client-side protocol simple. The proxies perform an initial handshake and exchange of local identities so that each proxy can map incoming and outgoing messages between locally known ID's and the correct socket port.
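For concreteness, here is a rough sketch of that start-up sequence, not Butterfly's actual code. The names *LOCAL-NODE-UUID*, REGISTER-PEER, READER-LOOP, and WRITER-LOOP are placeholders, UUIDs are treated as strings, and bordeaux-threads stands in for whatever native thread layer you use:

;; The reader thread owns the connection, performs the identity
;; handshake, then spawns its writer partner (sketch only).
(defun start-proxy (stream)
  (bt:make-thread
   (lambda ()
     ;; handshake: send our image's UUID, read the peer's
     (write-line *local-node-uuid* stream)
     (force-output stream)
     (let ((peer-uuid (read-line stream)))
       (register-peer peer-uuid stream)   ; fill the ID <-> socket mapping
       (bt:make-thread (lambda () (writer-loop stream peer-uuid))
                       :name "proxy-writer")
       (reader-loop stream peer-uuid)))
   :name "proxy-reader"))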
2. The Lisp reader needs to be extended so that composite objects can be interchanged - e.g., service ID's, remote addresses, etc. We used CLOS to represent some of these higher-level objects, and extended our reader using prefix #U to allow for interchange of printable representations of these kinds of objects.
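As an illustration only (our reader extension differs in detail), an #U dispatch can be hung off the readtable roughly like this, with MAKE-INTERCHANGE-OBJECT standing in for a hypothetical constructor keyed on the leading tag of the printed form:

;; lets forms such as #U(service :echo "node-17") read back as CLOS objects
(set-dispatch-macro-character
 #\# #\U
 (lambda (stream subchar arg)
   (declare (ignore subchar arg))
   (let ((form (read stream t nil t)))
     (apply #'make-interchange-object form))))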
3. Machine image ID's do need to be UUID's for interchange with other computers, or even between separate processes on the same computer. Each running image has its own notion of process ID's that may conflict with those of other processes. The only way to unambiguously inform each other about PID's is to attach a UUID as part of the identifier. We use simple cardinal integers, symbols, and most often keyword symbols for local identifiers. Mapping tables must exist to translate any of these representations to the actual process ID's and the thread object representing the running thread and its mailbox queue.
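Schematically, and with invented names, the arrangement looks something like this (UUIDs again treated as strings for simplicity; MAKE-NODE-UUID is a placeholder generator):

;; A PID carries the owning image's UUID plus a local key; two tables map
;; local keys back to the live thread and its mailbox (sketch only).
(defclass pid ()
  ((node-uuid :initarg :node-uuid :reader pid-node-uuid)
   (local-key :initarg :local-key :reader pid-local-key)))

(defvar *local-node-uuid* (make-node-uuid))           ; hypothetical UUID generator
(defvar *key->thread*  (make-hash-table :test 'eql))  ; local key -> thread object
(defvar *key->mailbox* (make-hash-table :test 'eql))  ; local key -> mailbox queue

(defun resolve-pid (pid)
  "Return the running thread for PID, or NIL if it lives on another image."
  (when (equal (pid-node-uuid pid) *local-node-uuid*)
    (gethash (pid-local-key pid) *key->thread*)))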
4. Exception processing needs careful supervision. A separate queue is attached to each thread to hold pending exceptions that can be generated by another running thread calling for termination of a sibling thread. When the afflicted thread resumes, it must check this exception queue before accessing normal messages.
To avoid imperative, side-effecting notions such as Erlang's use of flags to mark supervisor tasks that should remain safe against the failure of spawned and linked threads, we use an enclosing macro WITH-TRAPPED-EXCEPTIONS to make this more FPL in style.
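A stripped-down sketch of both halves of this item, with the queue accessors (DEQUEUE, DEQUEUE-BLOCKING, THREAD-EXCEPTION-QUEUE, THREAD-MAILBOX) invented for illustration:

;; The receive step drains the pending-exception queue before touching the
;; mailbox; WITH-TRAPPED-EXCEPTIONS turns an escaping condition (e.g. an
;; exit signal from a linked thread) into an ordinary return value, rather
;; than relying on a mutable trap-exit flag.
(defun next-message (thread)
  (let ((exn (dequeue (thread-exception-queue thread))))
    (if exn
        (error exn)                                    ; deliver the pending exception first
        (dequeue-blocking (thread-mailbox thread)))))  ; then normal messages

(defmacro with-trapped-exceptions (&body body)
  `(handler-case (progn ,@body)
     (error (c) (list :trapped c))))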
5. An agent thread is needed at each end of a socket connection to do the bidding of the opposite end. For example, remote spawning requires a remote agent to perform a local spawn. Termite already has that. We extended the duties of the Agent to include remote killing of threads, remote lookup of various ID's, etc.
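A rough picture of the agent's dispatch loop; the request shapes and the helpers (RECEIVE-REQUEST, SPAWN-LOCAL, KILL-THREAD, LOOKUP-ID, SEND-REPLY) are invented for illustration, not Butterfly's actual protocol:

;; The agent performs local actions on behalf of the remote end of the socket.
(defun agent-loop ()
  (loop
    (destructuring-bind (op reply-to &rest args) (receive-request)
      (case op
        (:spawn  (send-reply reply-to (apply #'spawn-local args)))
        (:kill   (kill-thread (first args)))
        (:lookup (send-reply reply-to (lookup-id (first args))))
        (otherwise (warn "Unknown agent request: ~S" op))))))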
6. Despite what Erlang would have you believe, it is not always possible to have direct communications between remote threads. They are often segregated from each other, e.g., on different sub-nets or behind a serial chain of servers, so direct replies to an RPC request are often impossible. Hence the need for forwarding proxies at each intermediate server location. Our proxy threads are transparent to the applications running on each node; this is made possible by the PID translation table and the proxy substitution PID table mentioned above.
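The forwarding step at an intermediate node looks roughly like this, reusing the PID sketch from item 3; the routing table and helpers (*UUID->PROXY*, DELIVER-LOCALLY, SUBSTITUTE-PROXY-REPLY-PID, ENQUEUE, PROXY-OUTGOING-QUEUE) are again placeholders:

;; If the destination PID resolves locally, deliver it; otherwise rewrite the
;; reply address to route back through this proxy and hand the message to the
;; proxy responsible for the next hop toward the destination image.
(defun route-message (msg)
  (let ((dest (msg-destination msg)))
    (if (resolve-pid dest)
        (deliver-locally msg)
        (let ((hop (gethash (pid-node-uuid dest) *uuid->proxy*)))
          (enqueue (proxy-outgoing-queue hop)
                   (substitute-proxy-reply-pid msg))))))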
7. Under heavy traffic conditions it is important for the proxy threads to avoid excessive consing, which would otherwise lead to excessive GC cycles. We use local queues of recyclable envelopes (vectors) that can be stuffed by the socket reader and writer threads for use in forwarding. Once these envelopes are no longer needed they are recycled back into the local queue.
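In outline, with an illustrative slot layout and pool discipline (not our actual envelope format), again leaning on bordeaux-threads for the lock:

;; A small pool of vectors reused by the reader/writer threads instead of
;; consing a fresh buffer per forwarded message.
(defvar *envelope-pool* '())
(defvar *envelope-lock* (bt:make-lock "envelope-pool"))

(defun claim-envelope ()
  (bt:with-lock-held (*envelope-lock*)
    (or (pop *envelope-pool*)
        (make-array 3 :initial-element nil))))   ; e.g. #(dest reply payload)

(defun recycle-envelope (env)
  (fill env nil)                                 ; drop references for the GC
  (bt:with-lock-held (*envelope-lock*)
    (push env *envelope-pool*)))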
Our initial tests among all the participating computers used a simple Echo server at the application level. That performed enough consing at each end to cause occasional long pauses in responses. Even though we are on a local area network, with typical ping response times of around 1-4 ms, we would see pauses of as much as 3-5 seconds while one node or another performed a large GC cycle, so our timeouts had to be adjusted accordingly. But the median RPC response time was actually around 7 ms over several million cycles.
I'm sure there are a bunch of other ideas we came up with to solve each problem as it was encountered. On the whole the re-design effort was an enjoyable experience. The net result was around 2300 LOC of Lisp after several rounds of refinement and speedup; of that, only 80 lines need to be examined when adapting to another compiler. We are now in the process of wrapping the communications with SSL to get the necessary level of commercial security. Our lab is one thing -- an environment for scientists -- but the real world can be a nasty place at times.
I hope this is helpful...
Dr. David McClain
Sr. VP, Embedded Systems
Asyrmatos Inc.
Boston & Tucson
phone: 520-529-2437
cell: 520-390-3995
web: www.asyrmatos.com
e-mail: dbm@asyrmatos.com