[gambit-list] gcc stage timing

Christian christian at pflanze.mine.nu
Mon Oct 16 18:52:22 EDT 2006


At 16:21 Uhr -0400 16.10.2006, Lang Martin wrote:
>quoth Christian:
>
>>  (Note that the "limit" is not actually limiting any calculation
>>  (assuming that futures do terminate); it's just making other futures
>>  wait until earlier ones are done. You'll want this when starting
>external tools, otherwise you'll make your machine's responsiveness
>>  suffer.)
>
>yeah, I'm not actually calling external processes. However, my
>instinct is to still have gambit catch the appropriate exceptions, and
>go until it runs out of memory or filehandles or processes, or
>whatever. Typically I set the limits from the outside with a ulimit
>command before I start the process, giving me a unified interface, and
>confidence that the limits will be respected. Also makes gambit
>graceful if you need to scale it down, and run in an environment with
>few resources.
>
>From a unix perspective, I think external limits and graceful failure
>are the right thing to do.

OK, but then you ignore portability. (I don't use Windows myself, but still.)

In our case (parallel gcc), it's problematic since running right 
below the process limit will make gcc fail when it tries to run 
subprocesses itself. Will you be able to deduce gcc's reason for 
failure from its exit status, or will you have to parse its output? 
Will you patch it to take an option telling it to retry forking its 
own children? Maybe we need a new GNU system to implement this idea 
right everywhere?

The semaphore approach has at least the advantage of being efficient 
in the sense that new tasks are started as soon as older ones finish 
(no retry/polling needed, so no phases of inactivity).
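
For concreteness, here is a minimal counting-semaphore sketch on top 
of Gambit's SRFI-18 style thread API (make-mutex, 
make-condition-variable, and mutex-unlock! taking a condition 
variable); make-semaphore and friends are my own names, not an 
existing Gambit API:

(define (make-semaphore n)
  ;; slot 0: mutex, slot 1: condition variable, slot 2: free count
  (vector (make-mutex) (make-condition-variable) n))

(define (semaphore-wait! s)
  (let ((m (vector-ref s 0))
        (cv (vector-ref s 1)))
    (mutex-lock! m)
    (let loop ()
      (if (> (vector-ref s 2) 0)
          (begin
            (vector-set! s 2 (- (vector-ref s 2) 1))
            (mutex-unlock! m))
          (begin
            ;; atomically unlock and block until signalled, then retry
            (mutex-unlock! m cv)
            (mutex-lock! m)
            (loop))))))

(define (semaphore-signal! s)
  (let ((m (vector-ref s 0)))
    (mutex-lock! m)
    (vector-set! s 2 (+ (vector-ref s 2) 1))
    (condition-variable-signal! (vector-ref s 1))
    (mutex-unlock! m)))

;; run thunk while holding one slot of the semaphore
(define (with-semaphore s thunk)
  (dynamic-wind
    (lambda () (semaphore-wait! s))
    thunk
    (lambda () (semaphore-signal! s))))

Each future wraps its body in with-semaphore, so at most n 
compilations run concurrently, and a waiting one starts the instant 
an earlier one signals.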

It may not be straightforward to deal with OS limits. Will you make 
sure that the Scheme thread leading to an out-of-memory error 
terminates and takes all its data with it, to make room for the 
others to finish? Or will you hope you can get through by just 
stopping the thread and waiting for the others to finish? Will you 
signal that no new calculations may be started? Will the emergency 
memory pool you have hopefully reserved suffice to survive the rest? 
Or will you flush data to disk temporarily and purge it from memory; 
but is that reasonable, isn't it circumventing the limit? (New term: 
"ulimit piercing".)
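
To sketch just the first of these options (assuming that heap 
overflows can in fact be caught from the overflowing thread via 
Gambit's with-exception-catcher and heap-overflow-exception?; 
run-worker and report! are made-up names):

(define (run-worker thunk report!)
  (thread-start!
   (make-thread
    (lambda ()
      (with-exception-catcher
       (lambda (e)
         (if (heap-overflow-exception? e)
             ;; report failure without holding on to any data, so
             ;; the GC can reclaim this thread's space for the others
             (report! 'out-of-memory)
             (raise e)))
       (lambda ()
         (report! (thunk))))))))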

In the end, it's really just a question of whether the limit should 
be in the OS or in the app. Lispers think "the Lisp image is the 
system", and Gambit goes that route insofar as it implements such 
things as code reloading and threading. So it's reasonable to accept 
that the limit lives in this sub-OS, too. IIRC DJB's programs just 
quit on limit transgressions; only his "whole systems" (spanning 
processes) deal with failures gracefully (having stored intermediate 
results to disk). So the application is living in Unix; Unix is part 
of the application. But building an application completely "in the 
Unix programming environment" is difficult. Even if you manage to 
pass complex data structures between processes, with data sharing 
(e.g. every cons cell is its own file (how efficiently does your OS 
handle this?), every cons is a tiny program (how efficiently?)), 
you'll have to implement some Lisp niceties like a garbage collector 
(for collecting your old unused files) yourself. If you don't, you 
can't just quit on failure, and you now have the new problem of 
dealing with partial failure inside an OS process (which is carrying 
multiple subsystem processes) while the whole process is being 
restricted by the OS. (I don't say it's impossible. But generally 
people don't seem to have found good solutions. Perl5 has the $^M 
variable as an emergency memory pool, but considering that this is 
not compiled in by default, I guess almost no one is making use of 
it. Surely, for Gambit (which I consider better suited as a 
long-running subsystem), it would make more sense.)

DJB says the limit should be enforced by the OS because he trusts 
the OS. It makes sense to do that, since the OS is shared by 
everyone and it pays off to make that part of the whole system 
secure (once and for all). But maybe the Unix concepts are partially 
showing their age? Nowadays you don't want to write little C tools 
with pipes and stream parsing anymore, right? Instead you want to 
benefit from "higher level concepts" offered by "higher level 
languages (and systems?)". If you're using a virtual machine like 
the JVM, Parrot, or also Gambit (I consider it mostly a virtual 
machine too), you're running a subsystem, and it quickly follows 
that such a subsystem isn't integrated 100% into the host system. 
(So the usual second-best approach is to set up OS limits to reduce 
possible damage from a buggy application (or an otherwise overloaded 
system), but code the application in a way that it should never 
exceed the OS limits.)

>>  The name |fn!| in
>>  (define (make-reporting-thread fn! . timeout) ..)
>>  looks funny: fn is normally used for "function", and (pure) functions
>>  don't have side effects. |proc| might be a better name. But actually
>>  I'm not sure what it does. (Well, I see that its return value isn't
>>  used at all.)
>
>That's it. It's passed to for-each, meaning it's "called for its
>side effects", as R5RS says.

OK, then I'd use |proc| as the name.

>>  What does make-lazy do? It can't just be an alternative
>>  implementation of a make-promise, since those would take a thunk, not
>>  a 1-ary function.
>
>ah, right, it does this:
>
>(define (make-lazy fn)
>   (define (me)
>     (delay
>       (call/cc
>        (lambda (exit)
>          (cons (fn (lambda () (exit '())))
>                (me))))))
>   (me))
>
>i.e., the fn creates the car part of a nearly infinite series, and can
>conditionally call the exit thunk to escape the cons & terminate the
>stream.

Heh, interesting approach; turning an imperative (state-mutating) 
generator into a stream, right?

>That seemed like the right way to do it.

Actually it seems to make sense if you've got an imperative 
generator. (But even on Gambit, call/cc costs a little more than a 
cons or two, so rewriting that as a straight functional stream 
generator may be a little more efficient; see the sketch below.)
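
Such a rewrite could look like this (a sketch; next is assumed to 
return a (value . new-state) pair, or #f to terminate, so no 
continuation capture is needed):

(define (stream-unfold next state)
  (delay
    (let ((r (next state)))
      (if r
          (cons (car r)                       ;; the value
                (stream-unfold next (cdr r))) ;; promise of the rest
          '()))))

;; e.g. the numbers n-1 down to 0:
(define (countdown n)
  (stream-unfold (lambda (i)
                   (and (> i 0)
                        (cons (- i 1) (- i 1))))
                 n))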

(BTW you should probably rename |fn| to something else here as well.)

>So, overall, it folds over the stream, creating a thread for each
>element which sends its result to the mailbox of the reporting
>thread. The fold counts the number of elements in the stream, and
>assigns the reporter's specific field that total number. The reporter
>loops over its mailbox until it has counted as high as its specific
>field, and then it exits.
>
>the primordial thread join!'s the reporter with a timeout, so that if
>a thread or two hang for a long period of time, I'll ignore them and
>quit anyway.

So the order of the values in the output is not the same as in the 
stream, right? So it's not really a "parallel map".
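
For reference, here is how I picture the whole arrangement, as a 
sketch on top of Gambit's mailbox primitives thread-send and 
thread-receive (compute, report! and parallel-for-each are made-up 
names):

(define (parallel-for-each compute report! lst timeout)
  (let* ((n (length lst))
         (reporter
          (thread-start!
           (make-thread
            (lambda ()
              (let loop ((i 0))
                (if (< i n)
                    (begin
                      ;; results arrive in completion order,
                      ;; not in list order
                      (report! (thread-receive))
                      (loop (+ i 1))))))))))
    (for-each
     (lambda (x)
       (thread-start!
        (make-thread
         (lambda ()
           (thread-send reporter (compute x))))))
     lst)
    ;; give up on stragglers: return #f after the timeout
    (thread-join! reporter timeout #f)))

A true parallel map would additionally have to tag each result with 
its position in the stream and reassemble them at the end.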

Christian.


