Dear Marc & Gambiteers,
I was hoping to write a short email about a simple way to support asynchronously aborting threads, as per https://github.com/gambit/gambit/issues/275. Then I realized that the problem was (Faré's PhD thesis)-complete, and what I ended up writing was a statement of intent for the non-trivial hacking of Gambit that I need to achieve to complete my thesis at https://j.mp/FarePhD. It's all connected, but I'll include copious background, so if you have the time to not skip this message, go grab yourself some tea/coffee/etc.
I'm enjoying actor programming on Gambit Scheme (actually using Gerbil Scheme as a layer on top of it). But, especially so after I noticed an actor going crazy and busy-looping at 100% CPU, I realized that I *really* wanted to be able to develop robust actor systems in the style of Erlang — except on top of Gambit.
Erlang allows programmers to build extremely robust systems by being based on the principle that errors, failures and mistakes WILL happen, and that the system should, as a matter of course, easily recover from them — by killing and restarting the failed subsystems. To elucidate the paradigm shift in this approach to software, see notably the great 2017 paper by Tomas Petricek, "Miscomputation in software: Learning to live with errors" http://tomasp.net/academic/papers/failures/ or his much shorter 2015 blog post: http://tomasp.net/blog/2015/failures/
One key mechanism for achieving this very robust style of developing distributed systems based on actors (which Erlang calls "processes") is Erlang's ability to safely kill a process at any point in time. There are many reasons why a process may fail: its execution may hit a software bug; it may hit a hardware bug; it may be hit by cosmic rays or outer forces; it may fall victim to some "wrench" thrown by software like Chaos Monkey https://blog.codinghorror.com/working-with-the-chaos-monkey/ that deliberately introduces random failures into the system to ensure that robustness issues are found and addressed earlier rather than later; it may be targeted by some denial-of-service attack; it may exceed some resource threshold; it may otherwise enter a state where it fails to correctly respond to queries, especially to the semi-random semi-periodic probing queries of its supervisor. Whatever the reason, inasmuch as the supervisor can detect failure, it can safely kill the failing process and restart a new one to replace it. The process will be unregistered from whatever service broker it was subscribed to, and the incoming request traffic will be picked up by its healthy registered peers until the replacement is fully operational.
Because interesting services are made of many actors (or "processes" in Erlang parlance) that act in concert and have mutually-dependent state, when a process dies (whether of natural or super-natural causes), all the processes linked to it (parents and children) are in turn sent a signal to shut down gracefully. They can explicitly catch and handle this signal if they really care to survive or to clean up something before they die; but by default, the linked process just dies immediately, freeing all its resources; and when it dies, a graceful shutdown signal is in turn sent to its own linked processes, recursively, in a tree of related processes. In Erlang, this ability to safely kill entire process trees is essential to building an extremely robust architecture where large services made of many coordinated actors automatically restart in a coherent way when errors (or regular system upgrades) happen.
I have long dreamed of having this Erlang-style robustness in a Lisp — rather than building a Lisp on top of Erlang, like LFE, which, while robust, would miss a lot of the system programming tradition of Lisp and its performant compilers. And Gambit is oh so close to it, yet, I realize, still so far.
Importantly, in Erlang, the actor shutdown signal works asynchronously, at least by default, for regular actors that don't explicitly catch and handle these signals. This means that a regular process may die in the middle of whatever the hell it is doing. This works well in Erlang because of its programming model, where processes are made of pure functions and communicate exclusively via message passing. The model ensures by construction that there is precious little shared state that may be left in an invalid state when an asynchronous signal happens: only the message mailbox, and a shared buffer extension sometimes used for performance reasons. And the system implementation ensures that accesses to this shared state are atomic with respect to asynchronous signal delivery, so the rest of the process is all private state and can be released without any resource leak.
Now, in Scheme (and most other languages, except maybe Haskell), there can be a LOT more shared state that may be left in disarray if a thread is interrupted in the middle of random operations. Stateful data structures are a common thing to use; if anything, making system calls or using libraries often involves a lot of state; the language implementation's runtime environment itself has plenty of shared state, and was never designed to play well with asynchronous interrupts. Which means that, if an asynchronous interrupt happens (a signal in Unix parlance), it must be expected that some of this shared data will be in some intermediate state, and that killing the current thread would leave the program unstable and unable to operate correctly: a lock may be held that will never be released; the state protected by that lock may violate invariants necessary to its operation; some resource borrowed from another thread, such as a handle granted by a server, may never be released or otherwise complete its usage cycle; the program may experience a deadlock or a livelock; some distributed protocol that was previously initiated (e.g. voting, partaking in some transaction, etc.) may never complete; another thread waiting on a spinlock may spin the CPU forever in a tight loop; if a low-level invariant is broken, the program may crash in ugly low-level ways, or worst of all, it may return wrong answers and do the wrong thing to your system — which can conceivably cause death and/or loss of millions of dollars.
On the other hand, if you fail to interrupt the thread when it is failing, then it might keep running in a zombie state that eats all your CPU or holds onto critical resources (shared data structures, sheer amounts of memory, file handles, etc.) that block the other computations from successfully making progress. Or its known failing state may lead to corruption of critical data. In this case too, costly or deadly failure may happen.
Therefore, in Scheme, as in most languages, at least at present, the limited solutions for providing an ersatz of Erlang-style robustness are as follows:

1- Do NOT allow for asynchronous killing at all at the Scheme level. Have only synchronous killing at the Scheme level.

2- Socially enforce a convention that all actors should regularly go back to the message loop, and that there should never be a deadlock, livelock, non-terminating computation or runaway code execution between two consecutive calls home to the message handling loop.

3- If some algorithms require indefinitely long computations, their implementation must maintain a discipline of "cooperative multitasking", like in the bad old days of the 1980s, whereby these long-lived computations are specially modified to periodically "yield" execution and give the message-handling loop the opportunity to process any synchronous shutdown message while the program is in a stable state (see the sketch right after this list).

4- Consider Scheme as a replacement not for Erlang, but for the lower-level language in which the Erlang VM is implemented (i.e. C), which has to deal with all the ugly synchronization details without being able to fully abstract over them.

5- Build further abstractions over this lower-level language, and stick to them by social convention. A regular Scheme cannot enforce these social conventions and prevent users from breaking the abstractions and reaching into the implementation details; however, Gerbil allows you to build and enforce a full abstraction for a module, thanks to its Racket-like #lang feature, which imposes global (rather than merely local) restrictions on what a module can express.

6- If you really want a group of actors that live and die together, put them in the same operating-system-level process (and either use OS process groups to implement trees of related processes, or implement that notion yourself using some kind of supervisor process). Then you can kill and restart the entire process (or set of processes). Unlike Erlang processes or Gambit threads, it's heavyweight; but it works, and sometimes that's exactly what's needed.

7- In general, as much as possible, use pure functional style and/or restrict side-effects to local state that is private (not shared), thus reducing issues related to shared state for processes that use this style. However, because the Scheme implementation's runtime and the available libraries were never designed for asynchronous interrupts, their own use of shared resources can still cause catastrophic failures in case of asynchronous aborts.
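As a concrete illustration of strategies 2 and 3, here is a minimal sketch of a cooperatively-yielding worker loop on top of Gambit's mailboxes. The only Gambit API assumed is thread-receive with a timeout of 0 and a default value (a non-blocking poll); do-one-chunk-of-work! and cleanup! are hypothetical placeholders for the actual computation.

  ;; Hedged sketch: a long-running actor that polls its mailbox for a
  ;; 'shutdown message between chunks of work, so that a synchronous
  ;; kill can be honored while the state is stable.
  ;; do-one-chunk-of-work! and cleanup! are hypothetical placeholders.
  (define (worker-loop state)
    (let ((msg (thread-receive 0 #f)))       ; non-blocking mailbox poll
      (if (eq? msg 'shutdown)
          (cleanup! state)                   ; reach a stable state, then exit
          (worker-loop (do-one-chunk-of-work! state)))))

A supervisor would start such a worker with thread-start! and later request its termination with (thread-send worker 'shutdown), accepting that the request is only honored at the next iteration of the loop.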
These strategies of course work, but they lead to code that is awkward, inefficient, not modular, tiresome and error-prone to write, impractical except at a small scale, and still fragile. It is not satisfactory to only provide fragile constructs that will explode if users fail to respect non-trivial coding conventions and to maintain them as the software evolves. This issue really calls for some robust abstraction mechanism that will automatically enforce the invariants through coherent automated code generation rather than manual discipline. Well, at least, Scheme is not worse than any other random language. The only languages that stand out for their robustness are those based on the Erlang VM, BEAM: i.e. Erlang itself, Elixir, LFE, Efene, Joxa, and whatever the Erlang flavor of the day is.
Now, what I would really like is to enhance Gambit Scheme with basic mechanisms to really allow safe asynchronous killing of threads. I told vyzo and he opened issue https://github.com/gambit/gambit/issues/275 on asynchronous aborts. My first reflex was to think that if you somehow have a notion of pseudo-atomic code blocks and can ensure that asynchronous signals are deferred until the end of the current code block, then everything will be fine. Cleanup forms in "finally" clauses or dynamic-wind forms may have to be considered atomic, or at least start with interrupts disabled. But otherwise, it should be pretty much a straightforward extension of what the GVM already supports for the sake of e.g. garbage collection, right? Nope.
It actually takes a whole lot to make proper asynchronous interrupts work in the presence of shared state. After thinking about the issue a bit more, I realized that it's actually the very same problem that has plagued me for years, and that I have solved in theory in my (incomplete) PhD thesis, but that still requires a practical implementation. And I also realized that my thesis has a solid argument for why there is no shortcut to the complete solution proposed in my thesis: a general protocol for declaring the "observability" of computations.
Indeed, for each level of abstraction that you (or your users) care about, there will be high-level invariants on the shared state that, if broken, leave the entire program unable to make progress at that level of abstraction, even though the state may be perfectly fine at lower levels of abstraction. Solving the problem at a low level of abstraction can never be enough to solve the issue at the higher levels of abstraction that the lower levels are only a means to support. Thus, you can never safely kill any thread in any existing language, with the exception of Erlang.
Yet, Erlang does it for all programs. And if you look carefully, you'll see that each and every programming language with preemptive user-level threads or a garbage collector supports pseudo-atomic blocks and properly defers asynchronous signal delivery to suitable "safe points", so that the invariants of its own virtual machine are enforced before a context switch may proceed, and the asynchronous signal handler cannot interfere with low-level implementation details of the language's virtual machine. In the case of Gambit, quite remarkably, asynchronous signal handling by the system has been compatible since 2015 with migrating processes from one GVM to another, e.g. C to JS to PHP — by making sure the signal to migrate is only processed at safe points relative to the GVM.
To find a general solution to the issue, you must first step back and look at the bigger picture: software can be seen as a "semantic tower", where each layer is the implementation of some more abstract computation A using some more concrete computation C. For instance, your program implements a user abstraction U on top of your programming language abstraction P; the compiler you use implements this abstraction P in terms of a lower-level virtual machine V. Then a lower layer expresses V in terms of a low-level view O of the system as provided by the operating system. The operating system itself implements O in terms of the documented CPU and chipset semantics C. C may include microcode that realizes the CPU abstraction in terms of a digital circuit D. D is implemented as transistors in terms of analog electrical circuits E. E is implemented in terms of quantum mechanics Q. Q is implemented by God in terms of his own digital physics computer a la Ed Fredkin. Many more abstraction levels may exist above, below, or in the middle, that were omitted from this list, yet may be added when observing the semantic tower from a wider point of view or with a finer resolution of detail.
From this point of view, the issue of asynchronous signal handling is then that at each layer of implementation, a low-level asynchronous interrupt signal may be received at a safe point for the lower level of abstraction, but the implementation may want to deliver a higher-level asynchronous signal, to be handled at a safe point for the higher level of abstraction. Each level of abstraction thus has its own notion of safe point, with its own restrictive invariants, that its implementation must express in terms of the lower level's notion of safe point, using the language in which it is written, which itself is expressed in terms of that lower abstraction's state and its laxer invariants. The general architecture of this semantic tower must therefore support "lifting" the notion of safe point, so that a higher-level safe point may be recovered from a lower-level safe point. In my thesis I call "observability" the corresponding property of implementations that can lift this notion of safe point. The developer in charge of providing an abstraction level must make sure it can never be caught "with its pants down" (to reuse the metaphor of the ITS hackers, as narrated by Alan Bawden in his great article on PCLSRing, an early documented instance of the notion of observability in a 1960s operating system). And for that he must rely on the lower-level system provided by the programming language he uses, which he may hopefully trust never to be caught with its pants down, but only observed in stable states.
Therefore, when an asynchronous signal is received for which a handler is registered at a given level of abstraction A, the system must somehow synchronize to a safe point for A before running the handler, and in general this level may be higher than that of Gambit's virtual machine. Furthermore, in the case of aborting a thread, this level of abstraction is the highest at which this thread matters to anyone (a user, or a supervisor program that knows how to rebuild the higher abstractions).
In simple cases, recovering a safe point for a level of abstraction A is simply a matter of letting the code run, and checking at each safe point reached whether an interrupt was received that requires processing at that level of abstraction (or one below). But many situations require supporting less simple cases: there may be ongoing transactions that need to be rolled back (aborted) or rolled forward (eagerly completed, or maybe partially completed but with some clean stable state registered that will cause a follow-up transaction); the abstract state may be a composite of the states of several concurrent systems, each of which may have to be stopped and synchronized to an observable state; performance may require shortcuts to be taken in the regular case that have to be compensated for when an interrupt is caught. In the most general case, whichever programmer is specifying the abstraction level A is himself using a programming language providing a more concrete level of abstraction C. When specifying a handler of asynchronous signals to recover a stable state at level A, the programmer necessarily needs to express it in the language he is using, in terms of the state at level C. Therefore, for that handler to run and synchronize to a safe point for A, the platform should first be able not just to synchronize to a safe point for C, but also to let the handler observe (recover, extract, reconstruct, inspect) the state at level C. Now, since there may be an even higher abstraction level H on top of A, it is not enough to synchronize to a safe point for A; the platform must also support observing (recovering, extracting, reconstructing, inspecting) the state at level A, so that the handler in the implementation of H on top of A can itself specify how to synchronize to a safe point for H and let yet higher levels of abstraction observe the state at level H.
Now, a naive understanding of "recovering the state at level A" can be expensive: you don't want to serialize the entire state of the virtual machine (potentially gigabytes of memory or more) every time you process an asynchronous interrupt handler. You want this recovery to be lazy, so that only the bits of state actually required by the handler need to be partially reified at the required level of abstraction. A naive implementation of safe points would create a closure to express this recovery at every safe point. A slightly less naive implementation would only create that closure *if* an interrupt was caught at that safe point. The general protocol for a safe point is therefore to have some kind of special form (safe-point level state), where level is some kind of object identifying the level of abstraction of the safe point (if possible known at compile-time, usually implicit when discussing safe points of a well-identified layer of the semantic tower), and state is a form, only evaluated when an interrupt is caught at said level, that permits recovery of the state at the specified abstraction level, if possible lazily.
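To make the shape of this protocol concrete, here is a purely illustrative Scheme sketch of what such a safe-point form might expand into; interrupt-pending? and run-interrupt-handlers! are assumptions of mine standing for whatever the runtime would use to record and dispatch asynchronous interrupts targeted at a given level, not existing Gambit APIs.

  ;; Hedged sketch of a (safe-point level state) special form.
  ;; interrupt-pending? and run-interrupt-handlers! are hypothetical.
  (define-macro (safe-point level state)
    `(if (interrupt-pending? ,level)
         ;; Only when an interrupt was caught do we build the closure
         ;; that lazily reifies the state at this abstraction level.
         (run-interrupt-handlers! ,level (lambda () ,state))))

Code at abstraction level A would then sprinkle (safe-point A (capture-A-state ...)) at the points where its invariants hold, and a compiler could coalesce or elide these checks as discussed next.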
The compiler hopefully knows how to merge safe points between levels of abstraction, so that tests for asynchronous interrupts at a higher-level safe point, and the creation of the corresponding higher-level state objects, only happen if an asynchronous interrupt was already caught at the corresponding lower-level safe point yet wasn't already handled by a lower-level handler. An even better compiler would eliminate redundant consecutive safe-point checking, so that e.g. safe points are only checked at the beginning of functions or loops (just like the implementation already does for its own lower-level checkpoints).
Now, it is not enough to have compiler support. The runtime library must also be written in a way that supports asynchronous interrupts, and the programming language must provide suitable abstractions. Notably, when allocating *any* kind of resource that an asynchronous interrupt may necessitate releasing, the operation that is atomic with respect to interrupts should be not merely allocating the resource, but allocating it AND atomically binding some variable to it; only then may a "finally" clause properly release the resource without a leak, should an asynchronous abort be received. (The "finally" clause will also handle synchronous exceptions or regular exit.) Potentially long-running library functions, and especially higher-order functions, may also have their own issues with respect to declaring safe points for higher levels of abstraction within the dynamic extent of their function call. When an abstraction level reexports such functionality from lower levels, it may have to subtly wrap this functionality in variants that suitably handle safe points. And the compiler may have to be able to suitably optimize away most wrappers.
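To make the allocate-and-bind point concrete, here is a minimal sketch of a with-resource combinator in which the allocation and the binding happen inside a single hypothetical without-interrupts form, so that the unwind thunk always releases exactly what was acquired; without-interrupts, acquire! and release! are assumptions of mine, not existing Gambit primitives.

  ;; Hedged sketch: allocate a resource and bind it to a variable in one
  ;; interrupt-atomic step, so the unwind thunk of dynamic-wind releases
  ;; exactly what was acquired, even if an asynchronous abort arrives
  ;; later in the body.  without-interrupts, acquire! and release! are
  ;; hypothetical.
  (define (with-resource acquire! release! body)
    (let ((resource #f))
      (dynamic-wind
        (lambda ()
          (without-interrupts
            (set! resource (acquire!))))     ; allocation + binding, atomically
        (lambda () (body resource))
        (lambda ()
          (if resource (release! resource))))))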
There is also the case where a thread receives a further asynchronous abort in the middle of processing an existing one, or where it gets stuck while executing cleanup forms in general. My understanding is that asynchronous aborts are specified with a target level of abstraction. By default, an abort signal (like a Unix kill -TERM) works at the highest level of abstraction that the programmer cares about, and should run all the cleanup forms. If the operator gets impatient, he may send signals with lower target levels of abstraction (down to a Unix kill -KILL), at which point levels of abstraction higher than the target level are invalidated, their cleanup forms are eschewed, and all linked processes at this level of abstraction are killed (and hopefully restarted by their supervisor). It is therefore possible to "lose" a layer of abstraction -- if there was a bug in the implementation of this layer of abstraction, at which point, well, that is exactly what "having a bug" means.
All in all, it's a lot of non-trivial work, especially since I need to modify the Gambit compiler to itself follow the protocol for the layers between Scheme and the GVM (it already follows it for the layers below the GVM, yay Marc!). But the result might be worth it, because, as I argue in my thesis (incomplete, but you can already read 141 pages worth of it), successfully enforcing this protocol unlocks an entire world of further cool features. I solved it all in theory. But since this is computing, not mathematics, theory is not enough and now I need to work on the implementation.
—♯ƒ • François-René ÐVB Rideau •Reflection&Cybernethics• http://fare.tunes.org Happiness is a journey, not a destination; happiness is to be found along the way not at the end of the road, for then the journey is over and it's too late. The time for happiness is today not tomorrow. — Paul H. Dunn
Quite a few words in there! Very solid exposition of a difficult problem.
Some first thoughts regarding implementation: We can go quite a ways with two primitives:
(thread-raise! thread obj)
(begin-atomic body ...)
thread-raise! is a generalization of thread-abort! that asynchronously raises an exception at the first safe point for the target thread. A safe point is one where interrupts are enabled _and_ asynchronous exceptions are not masked by an atomic region.
Atomic regions are demarcated with begin-atomic, which acts like begin, only it's equivalent to incrementing a per-processor atomic state counter for the dynamic extent of the body. When this counter is > 0, asynchronous exceptions are masked. When the counter is decremented back to zero, a pending asynchronous exception can be raised.
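As a minimal sketch of how such a begin-atomic might be expressed in terms of that counter (ignoring for a moment whether the counter is per-processor or per-thread, which is corrected below), assuming hypothetical atomic-depth, atomic-depth-set! and check-pending-exception! runtime hooks that do not exist in Gambit today:

  ;; Hedged sketch only: atomic-depth, atomic-depth-set! and
  ;; check-pending-exception! are hypothetical runtime hooks.
  (define-macro (begin-atomic . body)
    `(let ((t (current-thread)))
       (dynamic-wind
         (lambda () (atomic-depth-set! t (+ (atomic-depth t) 1)))
         (lambda () ,@body)
         (lambda ()
           (atomic-depth-set! t (- (atomic-depth t) 1))
           ;; once unmasked, deliver any exception that arrived meanwhile
           (if (zero? (atomic-depth t))
               (check-pending-exception! t))))))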
Wrt dynamic-wind, both the wind and unwind thunks must be considered atomic. Furthermore, if the wind thunk has been evaluated, then the unwind thunk should be guaranteed to be evaluated as well in the presence of asynchronous exceptions in the body thunk.
-- vyzo
...only it's equivalent to incrementing a per-processor atomic state...

This should be per-thread.
-- vyzo
On Sun, Jul 30, 2017 at 11:04 PM, Dimitris Vyzovitis vyzo@hackzen.org wrote:
Quite a few words in there! Very solid exposition of a difficult problem.
Some first thoughts regarding implementation: We can go quite aways with two primitives:
(thread-raise! thread obj) (begin-atomic body ...)
thread-raise! is a generalization of thread-abort! that asynchronously raises an exception in the first safe-point for the target thread. A safe point is defined when interrupts are enabled _and_ asynchronous exceptions are not masked by an atomic regions.
Atomic regions are demarcated with begin-atomic, which acts like begin only its equivalent to incrementing an per-processor atomic state counter for the dynamic extent of the body. When this counter is > 0, asynchronous exceptions are masked. When the counter is decremented back to zero, a pending asynchronous exception can be raised.
Wrt to dynamic winds, both the wind and unwind thunks must be considered atomic. Furthermore, if the wind thunk has been evaluated, then the unwind thunk should be guaranteed to be evaluated as well in the presence of asynchronous exceptions in the body thunk.
-- vyzo
On Sun, Jul 30, 2017 at 10:02 PM, Faré fahree@gmail.com wrote:
Dear Marc & Gambiteers,
I was hoping to write a short email about a simple way to support asynchronously aborting threads, as per https://github.com/gambit/gambit/issues/275 Then I realized that the problem was (Faré's PhD thesis)-complete, and what I ended up writing was a statement of intent for the non-trivial hacking of Gambit that I need to achieve to complete my thesis at https://j.mp/FarePhD It's all connected, but I'll include copious background, so if you have time not to skip this message, go grab yourself some tea/coffee/etc.
I'm enjoying actor programming on Gambit Scheme (actually using Gerbil Scheme as a layer on top of it). But, especially so after I noticed an actor going crazy and busy looping with 100% of CPU, I realized that I *really* wanted to be able to develop robust actor systems in the style of Erlang — except on top of Gambit.
Erlang allows programmers to build extremely robust systems by being based on the principle that errors, failures and mistakes WILL happen, and that the system should as mattter of course easily recover from them — by killing and restarting the failed subsystems. To elucidate the paradigm shift in this approach to software, see notably the great 2017 paper by Tomas Petricek "Miscomputation in software Learning to live with errors" http://tomasp.net/academic/papers/failures/ or his much shorter 2015 blog post: http://tomasp.net/blog/2015/failures/
One key mechanism to achieving this very robust style of developing distributed systems based on actors (that Erlang calls "processes") is Erlang's ability to safely kill a process at any point in time. There are many reasons why a process may fail: its execution may hit a software bug; it may hit a hardware bug; it may be hit by cosmic rays or outer forces; it may fall victim to some "wrench" thrown by software like Chaos Monkey https://blog.codinghorror.com/working-with-the-chaos-monkey/ that deliberately introduces random failures into the system to ensure that robustness issues are found and addressed earlier rather than later; it may be targetted by some denial-of-service attack; it may exceed some resource threshhold; it may otherwise enter a state where it fails to correct respond to queries, especially so to semi-random semi-periodic probing queries by its supervisor. Whatever the reason, inasmuch as the supervisor can detect failure, it can safely kill the failing process, and restart a new one to replace it. The process will be unregistered from whatever service broker it was subscribed to, and the incoming request traffic will be picked up by its healthy registered peers until the replacement is fully operational.
Because interesting services are made of many actors (or "processes" in Erlang parlance) that act in concert and have mutually-dependent state, when a process dies (whether of natural or super-natural causes), all the processes linked to it (parents and children) are in turn sent a signal to shutdown graciously. They can explicitly catch and handle this signal if they really care to survive or to cleanup something before they die; but by default, the linked process just dies immediately, freeing all its resources; when it dies, so will a graceful shutdown signal be sent to its own linked processes, recursively, in a tree of related processes. In Erlang, this ability to safely kill entire process trees is essential to build an extremely robust architecture where large services made of many coordinated actors automatically restart in a coherent way when errors (or regular system upgrades) happen.
I have long dreamed to have this Erlang-style robustness in a Lisp — rather than building a Lisp on top of Erlang, like LFE, that while robust would miss a lot of the system programming tradition of Lisp and its performant compilers. And Gambit is oh so close to it, yet I realize still so far.
Importantly, in Erlang, the actor shutdown signal works asynchronously, at least by default, for regular actors that don't explicitly catch and handle these signals. This means that a regular process may die in the middle of whatever the hell it is doing. This works well in Erlang, because of its programming model where processes are made of pure functions and communicate exclusively via message passing. The model ensures by construction that there is precious little shared state that may be left in an invalid state when an asynchronous signal happens, only the message mailbox and a shared buffer extension sometimes used for performance reasons. And the system implementation ensures that accesses to this shared state are atomic with respect to asynchronous signal delivery, so the rest of the process is all private state and can be released without any resource leak.
Now, in Scheme (and most other languages, except maybe Haskell), there can be a LOT more shared state that may be left in disarray if a thread is interrupted in the middle of random operations. Stateful data structures are a common thing to use; if anything, making system calls or using libraries often involves a lot of state; the language implementation's runtime environment itself has plenty of shared state, and was never designed to play well with asynchronous interrupts. Which means that, if an asynchronous interrupt happens (a signal in Unix parlance), it must be expected that some this shared data will be in some intermediate state, and that killing the current thread would leave the program unstable and unable to operate correctly: a lock may be held that will never be released; the state protected by that lock may violate necessary invariants to its operation; some resource borrowed from another thread such as a handler granted by a server may never be released or otherwise complete its usage cycle the program may be experience a deadlocked or livelock; some distributed protocol that was previously initiated (e.g. voting, partaking in some transaction, etc.) may never complete; another thread waiting on a spinlock may spin the CPU forever in a tight loop; if a low-level invariant is broken, the program may crash in ugly low-level ways, or worst of all, it may return wrong answers and do the wrong thing to your system — which can conceivably cause death and/or loss of millions of dollars.
On the other hand, if you fail to interrupt the thread when it is failing, then it might keep running in a zombie state that eats all your CPU or holds onto critical resources (shared data structures, sheer amount of memory, file handles, etc.) that blocks the other computations from successfully making progress. Or its known failing state may lead to corruption of critical data. In this case too, costly or deadly failure may happen.
Therefore, in Scheme, as in most languages, at least at present, the limited solutions to providing an ersatz of Erlang-style robustness are as follow: 1- Do NOT allow for asynchronous killing at all at the Scheme level. Have only synchronous killing at the Scheme level. 2- Socially enforce a convention that all actors should regularly go back to the message loop, and that there should never be a deadlock, live lock, non-terminating computation or runaway code execution between two consecutive calls home to the message handling loop. 3- If some algorithm require indefinitely long computations, their implementation must maintain a discipline of "cooperative multitasking", like in the bad old days of the 1980s, whereby these long-lived computations will be specially modified to periodically "yield" execution and give the message loop process the opportunity to process any synchronous shutdown message while the program is in a stable state. 4- Consider Scheme as a replacement not for Erlang, but for the lower-level language in which the Erlang VM is implemented (i.e. C), that has to deal with all the ugly synchronization details, without being able to fully abstract over them. 5- Build further abstractions over this lower-level language, and stick to them by social convention. A regular Scheme cannot enforce these social conventions and prevent users from breaking the abstractions and reaching into the implementation details; however, Gerbil allows you to build and enforce a full abstraction for module, thanks to its Racket-like #lang feature, that impose global (rather than merely local) restrictions on what a module can express. 6- If you really want a group of actors that live and die together, put them in a same Operating System level process (and either use OS process groups to implement trees of related processes, or implement yourself that notion using some kind of supervisor process). Then you can kill and restart the entire process (or set of processes). Unlike Erlang processes or Gambit threads, It's heavy weight; but it works, and sometimes that's what's exactly needed. 7- In general, as much as possible, use pure functional style and/or restrict side-effects to local state that is private (not shared), thus reducing issues related to shared state for processes that use this style. However, because the Scheme implementation's runtime and the available libraries were never designed for asynchronous interrupts, their own use of shared resources can still cause catastrophic failures in case of asynchronous aborts.
These strategies of course work, but lead to code that is awkward, inefficient, not modular, tiresome and error-prone to write, impractical except at a small scale, and still fragile. It is not satisfactory to only provide fragile constructs that will explode if users fail to respect non-trivial coding conventions and maintain them as the software evolves. This issue really calls for some robust abstraction mechanism that will automatically enforce the invariants through coherent automated code generation rather than manual discipline. Well, at least, Scheme is no worse than any other random language. The only languages that stand out for their robustness are those based on the Erlang VM, BEAM, i.e. Erlang itself, Elixir, LFE, Efene, Joxa, and whatever other BEAM flavor of the day.
Now, what I would really like is to enhance Gambit Scheme with basic mechanisms that really allow safe asynchronous killing of threads. I told vyzo and he opened issue https://github.com/gambit/gambit/issues/275 on asynchronous aborts. My first reflex was to think that if you somehow have a notion of pseudo-atomic code blocks and can ensure that asynchronous signals are deferred until the end of the current code block, then everything will be fine. Cleanup forms in "finally" clauses or dynamic-wind forms may have to be considered atomic, or at least start with interrupts disabled. But otherwise, it should be pretty much a straightforward extension of what the GVM already supports for the sake of e.g. garbage collection, right? Nope.
It actually takes a whole lot to make proper asynchronous interrupts work in the presence of shared state. After thinking about the issue a bit more, I realized that it's actually the very same problem that has plagued me for years, and that I have solved in theory in my (incomplete) PhD thesis, but that still requires a practical implementation. And I also realized that my thesis has a solid argument why there is no shortcut to the complete solution proposed in my thesis, of a general protocol for declaring "observability" of computations.
Indeed, for each level of abstraction that you (or your users) care about, there will be high-level invariants on the shared state that, if broken, leave the entire program unable to make progress at that level of abstraction, even though the state may be perfectly fine at lower levels of abstraction. Solving the problem at a low level of abstraction can never be enough to solve the issue at the higher levels of abstraction that the lower levels are only a means to support. Thus, you can never safely kill any thread in any existing language, with the exception of Erlang.
Yet, Erlang does it for all programs. And if you look carefully, you'll see that each and every programming language with preemptive user-level threads or a garbage collector supports pseudo-atomic blocks and properly defers asynchronous signal delivery to suitable "safe points", so that the invariants of its own virtual machine are enforced before a context switch may proceed, without the asynchronous signal handler interfering with low-level implementation details of the language's virtual machine. In the case of Gambit, quite remarkably, asynchronous signal handling by the system has been compatible since 2015 with migrating processes from one GVM to another, e.g. C to JS to PHP — by making sure the signal to migrate is only processed at safe points relative to the GVM.
To find a general solution to the issue, you must first step back and look at the bigger picture: software can be seen as a "semantic tower", where each layer is the implementation of some more abstract computation A using some more concrete computation C. For instance, your program implements a user abstraction U on top of your programming language abstraction P; the compiler you use implements this abstraction P in terms of a lower-level virtual machine V. Then a lower layer expresses V in terms of a low-level view O of the system as provided by the operating system. The operating system itself implements O in terms of the documented CPU and chipset semantics C. C may include microcode that realizes the CPU abstraction in terms of a digital circuit D. D is implemented as transistors in terms of analog electrical circuits E. E is implemented in terms of quantum mechanics Q. Q is implemented by God in terms of his own digital physics computer a la Ed Fredkin. Many more abstraction levels may exist above, below, or in the middle, that were omitted from this list, yet may be added when observing the semantic tower from a wider point of view or with a finer resolution of details.
From this point of view, the issue of asynchronous signal handling is then that at each layer of implementation, a low-level asynchronous interrupt signal may be received at a safe point for the lower level of abstraction, but the implementation may want to deliver a higher-level asynchronous signal, to be handled at a safe point for the higher level of abstraction. Each level of abstraction thus has its own notion of safe point, with its own restrictive invariants, which its implementation must express in terms of the lower level's notion of safe point, using the language in which it is written, itself expressed in terms of that lower abstraction's state and its laxer invariants. The general architecture of this semantic tower must therefore support "lifting" the notion of safe point, so that a higher-level safe point may be recovered from a lower-level safe point. In my thesis I call the corresponding property of implementations that can lift this notion of safe point "observability". The developer in charge of providing an abstraction level must make sure it can never be caught "with its pants down" (to reuse the metaphor of the ITS hackers, as narrated by Alan Bawden in his great article on PCLSRing, an early documented instance of the notion of observability, in a 1960s operating system). And for that he must rely on the lower-level system provided by the programming language he uses, which he may hopefully trust to never itself be caught with its pants down, but only be observed in stable states.
Therefore, when an asynchronous signal is received for which a handler is registered at a given level of abstraction A, the system must somehow synchronize to a safe point for A before running the handler, and in general this level may be higher than that of Gambit's virtual machine. Furthermore, in the case of aborting a thread, this level of abstraction is the highest at which this thread matters to anyone (user, or supervisor program that knows how to rebuild higher abstractions).
In simple cases, recovering a safe point for a level of abstraction A is simply a matter of letting the code run, and checking at each safe point reached whether an interrupt was received that requires processing at that level of abstraction (or one below). But many reasons may require supporting less simple cases: there may be ongoing transactions that need to be rolled back (aborted) or rolled forward (eagerly completed, or maybe partially completed but with some clean stable state registered that will trigger a follow-up transaction); the abstract state may be a composite of the states of several concurrent systems, each of which may have to be stopped and synchronized to an observable state; performance may require shortcuts to be taken in the regular case that have to be compensated for when an interrupt is caught. In the most general case, whichever programmer is specifying the abstraction level A is himself using a programming language providing a more concrete level of abstraction C. When specifying a handler of asynchronous signals to recover a stable state at level A, the programmer necessarily needs to express it in the language he is using, in terms of the state at level C. Therefore, for that handler to run and synchronize to a safe point for A, the platform should first be able not just to synchronize to a safe point for C, but also to let the handler observe (recover, extract, reconstruct, inspect) the state at level C. Now, since there may be an even higher abstraction level H on top of A, it is not enough to synchronize to a safe point for A: the platform must also support observing (recovering, extracting, reconstructing, inspecting) the state at level A, so the handler in the implementation of H with A can itself specify how to synchronize to a safe point for H and let yet higher levels of abstraction observe the state at level H.
Now, a naive understanding of "recovering the state at level A" can be expensive: you don't want to serialize the entire state of the virtual machine (potentially gigabytes of memory or more) every time you process an asynchronous interrupt handler. You want this recovery to be lazy, so that only the bits of state actually required by the handler need to be partially reified at the required level of abstraction. A naive implementation of safe points would create a closure to express this recovery at every safe point. A slightly less naive implementation would only create that closure *if* an interrupt was caught at that safe point. The general protocol for a safe point is therefore to have some kind of special form (safe-point level state), where level is some kind of object identifying the level of abstraction of the safe point (if possible known at compile-time, usually implicit when discussing safe points of a well-identified layer of the semantic tower), and state is a form only evaluated when an interrupt is caught at said level, that permits recovery of the state at the specified abstraction level, if possible lazily.
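To make the shape of that protocol concrete, here is a hedged sketch of how such a special form might macro-expand; interrupt-pending-at-level? and handle-interrupt-at-level! are hypothetical runtime hooks, not existing Gambit API, and a real compiler would of course generate something much cheaper than a macro:

  (define-macro (safe-point level state-form)
    `(when (interrupt-pending-at-level? ,level)   ; cheap test, almost always false
       ;; Only when an interrupt was actually caught do we build the
       ;; (lazy) reification of the state at this abstraction level.
       (handle-interrupt-at-level! ,level (delay ,state-form))))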
The compiler hopefully knows how to merge safe points between levels of abstraction, so that tests for asynchronous interrupts at a higher-level safe point and creation of the corresponding higher-level state objects only happen if an asynchronous interrupt was already caught at the corresponding lower-level safe point, yet wasn't handled already by a lower-level handler. An even better compiler would eliminate redundant consecutive safe-point checks, so that e.g. safe points are only checked at the beginning of functions and loops (just like the implementation already does for its own lower-level checkpoints).
Now, it is not enough to have compiler support. The runtime library must also be written in a way that supports asynchronous interrupts, and the programming language must provide suitable abstractions. Notably, when allocating *any* kind of resource that an asynchronous interrupt may necessitate releasing, the operation that is atomic with respect to interrupts should be not merely allocating the resource, but allocating it AND atomically binding some variable to it; only then may a "finally" clause properly release the resource without a leak should an asynchronous abort be received. (The "finally" clause will also handle synchronous exceptions or regular exit.) Potentially long-running library functions, and especially higher-order functions, may also have their own issues with respect to declaring safe points for higher levels of abstraction within the dynamic extent of their function call. When an abstraction level reexports such functionality from lower levels, it may have to subtly wrap this functionality in variants that suitably handle safe points. And the compiler may have to be able to suitably optimize away most wrappers.
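For instance, here is a hedged sketch of that allocate-AND-bind discipline using plain dynamic-wind; without-interrupts is a placeholder for whatever primitive ends up masking asynchronous aborts (stubbed out here so the sketch runs), and the open/close arguments stand for any acquire/release pair:

  ;; Placeholder: the real thing would mask asynchronous aborts
  ;; for the dynamic extent of the thunk.
  (define (without-interrupts thunk) (thunk))

  (define (call-with-resource open-the-resource close-the-resource proc)
    (let ((resource #f))
      (dynamic-wind
        (lambda ()
          ;; Allocation and binding happen as one interrupt-atomic step,
          ;; so the resource can never be allocated yet unreachable.
          (without-interrupts
            (lambda () (set! resource (open-the-resource)))))
        (lambda () (proc resource))
        (lambda ()
          ;; Runs on normal exit, synchronous exception, or asynchronous abort.
          (if resource (close-the-resource resource))))))

One would then call e.g. (call-with-resource (lambda () (open-input-file "some-file")) close-port read-line), which is abort-safe provided the unwind thunk is itself guaranteed to run, which is exactly vyzo's point below.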
There is also the case where a thread receives a further asynchronous abort in the middle of processing an existing one, or where it gets stuck while executing cleanup forms in general. My understanding is that asynchronous aborts are specified with a target level of abstraction. By default, an abort signal (like a Unix kill -TERM) works at the highest level of abstraction that the programmer cares about, and should run all the cleanup forms. If the operator gets impatient, he may send signals with lower target levels of abstraction (down to a Unix kill -KILL), at which point the levels of abstraction higher than the target level are invalidated, their cleanup forms are eschewed, and all linked processes at this level of abstraction are killed (and hopefully restarted by their supervisor). It is therefore possible to "lose" a layer of abstraction if there was a bug in the implementation of this layer; at which point, well, that is exactly what "having a bug" means.
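At the supervisor end, the escalation might look like the following sketch, using only existing Gambit primitives: the 'shutdown message plays the role of the high-level kill -TERM (all cleanup forms run), thread-terminate! plays the role of the lowest-level kill -KILL (no cleanup forms at all), and the 5-second patience threshold is arbitrary:

  (define (stop-actor! thread)
    ;; Ask nicely at the highest level of abstraction first.
    (thread-send thread 'shutdown)
    ;; If the actor hasn't exited after 5 seconds, give up on the higher
    ;; levels and kill it at the lowest level, skipping all cleanups.
    ;; (Handling of a thread that died with an exception is elided.)
    (if (eq? 'timed-out (thread-join! thread 5 'timed-out))
        (thread-terminate! thread)))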
All in all, it's a lot of non-trivial work, especially since I need to modify the Gambit compiler to itself follow the protocol for the layers between Scheme and the GVM (it already follows it for the layers below the GVM, yay Marc!). But the result might be worth it, because, as I argue in my thesis (incomplete, but you can already read 141 pages worth of it), successfully enforcing this protocol unlocks an entire world of further cool features. I solved it all in theory. But since this is computing, not mathematics, theory is not enough and now I need to work on the implementation.
—♯ƒ • François-René ÐVB Rideau •Reflection&Cybernethics• http://fare.tunes.org Happiness is a journey, not a destination; happiness is to be found along the way not at the end of the road, for then the journey is over and it's too late. The time for happiness is today not tomorrow. — Paul H. Dunn
On Sun, Jul 30, 2017 at 4:04 PM, Dimitris Vyzovitis vyzo@hackzen.org wrote:
Quite a few words in there! Very solid exposition of a difficult problem.
Thanks!
Some first thoughts regarding implementation: We can go quite a ways with two primitives:
(thread-raise! thread obj)
(begin-atomic body ...)
thread-raise! is a generalization of thread-abort! that asynchronously raises an exception at the first safe point for the target thread. A safe point is a point where interrupts are enabled _and_ asynchronous exceptions are not masked by an atomic region.
Atomic regions are demarcated with begin-atomic, which acts like begin except that it is equivalent to incrementing a per-processor atomic state counter for the dynamic extent of the body. When this counter is > 0, asynchronous exceptions are masked. When the counter is decremented back to zero, a pending asynchronous exception can be raised.
No, that's not quite right. Instead, I believe that if you want "interruptible" to be the default, you still need a notion of "abstraction levels", and to have a (begin-atomic level) and an (end-atomic level) that are not necessarily in the same scope, with (safe-point level forms) being a bit like (begin (end-atomic level) (begin-atomic level)). You also need a level attached to your thread (as a parameter?) which will be used by default when delivering an asynchronous abort (but can be explicitly lowered if you want to kill -9 your thread https://www.youtube.com/watch?v=Fow7iUaKrq4 at which point the rest of your process WILL be left in a less-than-fully-functional state). For advanced uses, you'd have something like (with-safe-point-handler [level lower-level (lambda (lower-level-state) higher-level-state-form)] ...forms...).
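To make the shape of the proposal above a bit more concrete, here is a hedged sketch; every name in it (begin-atomic, end-atomic, safe-point, with-safe-point-handler) is hypothetical, none of it is existing Gambit or Gerbil API:

  (define-macro (safe-point level . state-forms)
    ;; "a bit like (begin (end-atomic level) (begin-atomic level))", except
    ;; that end-atomic also optionally receives a lazily-computed description
    ;; of the state at `level`, evaluated only if an abort was actually
    ;; pending at that level.
    `(begin
       (end-atomic ,level (delay (begin ,@state-forms)))
       (begin-atomic ,level)))

  ;; Advanced use: lifting a lower-level safe point into a higher-level one.
  ;; (with-safe-point-handler
  ;;     [app-level gvm-level
  ;;      (lambda (gvm-state) (reconstruct-app-state gvm-state))]
  ;;   ...forms...)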
Wrt dynamic winds, both the wind and unwind thunks must be considered atomic. Furthermore, if the wind thunk has been evaluated, then the unwind thunk should be guaranteed to be evaluated as well in the presence of asynchronous exceptions in the body thunk.
Agreed. Optionally, the dynamic-wind would have an abstraction level such that you eschew the forms if you're willing to wholly sacrifice the abstraction level and fall back to a level below it.
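A hedged sketch of what that leveled dynamic-wind could look like, reusing the hypothetical begin-atomic/end-atomic from the sketch above (again, nothing here is existing API); an abort targeted at a level strictly below `level` would not be deferred by the atomic brackets, which is exactly the "sacrifice the whole level" escape hatch:

  (define (dynamic-wind-at-level level before thunk after)
    (dynamic-wind
      ;; Wind and unwind thunks run with aborts at `level` deferred.
      (lambda () (begin-atomic level) (before) (end-atomic level))
      thunk
      (lambda () (begin-atomic level) (after) (end-atomic level))))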
—♯ƒ • François-René ÐVB Rideau •Reflection&Cybernethics• http://fare.tunes.org None are more hopelessly enslaved than those who falsely believe they are free. — Johann Wolfgang von Goethe
On Sun, Jul 30, 2017 at 5:26 PM, Bradley Lucier lucier@math.purdue.edu wrote:
Where does the old gambit+termite fit into your taxonomy?
My limited understanding is that the current code is based on termite, which never had asynchronous aborts. Asynchronous abort of user-land threads is HARD.
—♯ƒ • François-René ÐVB Rideau •Reflection&Cybernethics• http://fare.tunes.org A cuddle a day keeps the shrink away