At 16:21 -0400 on 16.10.2006, Lang Martin wrote:
quoth Christian:
(Note that the "limit" is not actually limiting any calculation (assuming that futures do terminate); it's just making other futures wait until earlier ones are done. You'll want this when starting external tools, otherwise you'll make your machine's responsiveness suffer.)
yeah, I'm not actually calling external processes. However, my instinct is to still have gambit catch the appropriate exceptions, and go until it runs out of memory or filehandles or processes, or whatever. Typically I set the limits from the outside with a ulimit command before I start the process, giving me a unified interface, and confidence that the limits will be respected. It also makes gambit graceful if you need to scale it down and run in an environment with few resources.
From a unix perspective, I think external limits and graceful failure are the right thing to do.
OK, but then you're ignoring portability. (I don't use Windows myself, but still.)
In our case (parallel gcc), it's problematic since running right below the process limit will make gcc fail when it tries to run subprocesses itself. Will you be able to deduce gcc's reason for failure from its exit status, or will you have to parse its output? Will you patch it to take an option telling it to retry forking its own children? Maybe we need a new GNU system to implement this idea right everywhere?
The semaphore approach has at least the advantage of being efficient in the sense that new tasks are started as soon as older ones finish (no retry/polling needed, so no phases of inactivity).
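To make that concrete, here is a minimal sketch of the idea. Gambit has no counting semaphore built in as far as I know, so make-semaphore, semaphore-wait!, semaphore-signal! and limited-task below are my own names, built on Gambit's mutexes and condition variables; error handling is omitted:

(define (make-semaphore n)
  ;; [mutex, condition variable, free slots]
  (vector (make-mutex) (make-condition-variable) n))

(define (semaphore-wait! s)
  (let ((m (vector-ref s 0))
        (cv (vector-ref s 1)))
    (mutex-lock! m)
    (let loop ()
      (if (> (vector-ref s 2) 0)
          (begin
            (vector-set! s 2 (- (vector-ref s 2) 1))
            (mutex-unlock! m))
          (begin
            ;; atomically release the mutex and wait for a signal
            (mutex-unlock! m cv)
            (mutex-lock! m)
            (loop))))))

(define (semaphore-signal! s)
  (let ((m (vector-ref s 0))
        (cv (vector-ref s 1)))
    (mutex-lock! m)
    (vector-set! s 2 (+ (vector-ref s 2) 1))
    (condition-variable-signal! cv)
    (mutex-unlock! m)))

(define task-limit (make-semaphore 4))  ;; at most 4 tasks at once

(define (limited-task thunk)
  (thread-start!
   (make-thread
    (lambda ()
      (semaphore-wait! task-limit)
      (let ((result (thunk)))
        (semaphore-signal! task-limit)
        result)))))

A waiting task blocks on the condition variable, so a new task starts as soon as an old one signals, with no polling.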
It may not be straightforward to deal with OS limits. Will you make sure that the scheme thread leading to an out-of-memory error terminates and takes all its data with it, to make room for the others to finish? Or will you hope you can get through by just stopping the thread and waiting for the others to finish? Will you signal that no new calculations may be started? Will the emergency memory pool you have hopefully reserved suffice to survive the rest? Or will you flush data to disk temporarily and purge it from memory--but is that reasonable, isn't it circumventing the limit? (New term: "ulimit piercing".)
In the end, it's really just a question of whether the limit should be in the OS or in the app. Lispers think "the lisp image is the system", and Gambit goes that route insofar as it implements such things as code reloading or threading. So it's reasonable to accept that the limit is in this sub-OS, too. IIRC DJB's programs just quit on limit transgressions; only his "whole systems" (spanning processes) deal with failures gracefully (having stored intermediate results to disk). So the application lives in unix, and unix is part of the application.

But building an application completely "in the unix programming environment" is difficult. Even if you manage to pass complex data structures between processes, with data sharing (e.g. every cons cell is its own file (how efficiently does your OS handle that?), every cons is a tiny program (how efficient is that?)), you'll have to implement some Lisp niceties, like a garbage collector (for collecting your old unused files), yourself. If you don't, you can't just quit on failure, and you now have the new problem of dealing with partial failure inside an OS process (which is carrying multiple subsystem processes) while the whole process is being restricted by the OS. (I don't say it's impossible, but generally people don't seem to have found good solutions. Perl5 has the $^M variable as an emergency memory pool, but considering that it is not compiled in by default, I guess almost no one makes use of it. Surely, for Gambit (which I consider better suited as a long-running subsystem), it would make more sense.)
DJB says the limit should be enforced by the OS because he trusts the OS. It makes sense to do that, since the OS is shared by everyone and it pays off to make that part of the whole system secure (once and for all). But maybe the Unix concepts are partly showing their age? Nowadays you don't want to write little C tools with pipes and stream parsing anymore, right?, but instead benefit from "higher level concepts" offered by "higher level languages (and systems?)". If you're using a virtual machine like the JVM, Parrot, or also Gambit (I consider this to be mostly a virtual machine too), you're running a subsystem, and it follows almost immediately that such a subsystem isn't integrated 100% into the host system. (So the usual second-best approach is to set up OS limits to reduce possible damage from a buggy application (or otherwise overloaded system), but code the application in a way that it should never exceed the OS limits.)
The name |fn!| in (define (make-reporting-thread fn! . timeout) ..) looks funny: fn is normally used for "function", and (pure) functions don't have side effects. |proc| might be a better name. But actually I'm not sure what it does. (Well, I see that its return value isn't used at all.)
That's it. It's passed to for-each, meaning it's "called for its side effects", as R5RS says.
OK then I'd use |proc| as name.
What does make-lazy do? It can't just be an alternative implementation of a make-promise, since those would take a thunk, not a 1-ary function.
ah, right, it does this:
(define (make-lazy fn)
  (define (me)
    (delay
      (call/cc
       (lambda (exit)
         (cons (fn (lambda () (exit '())))
               (me))))))
  (me))
i.e., fn creates the car part of a potentially infinite series, and can conditionally call the exit thunk to escape the cons and terminate the stream.
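For example, here's roughly how I'd wrap a port with it (a toy sketch; port->lazy, lazy-for-each and the file name are made up, not from my program):

(define (port->lazy port)
  (make-lazy
   (lambda (exit)
     (let ((line (read-line port)))
       (if (eof-object? line)
           (exit)   ;; escape the cons: the stream ends at EOF
           line)))))

(define (lazy-for-each proc lazy)
  ;; each node is a promise forcing to '() or (value . next-promise)
  (let loop ((l lazy))
    (let ((v (force l)))
      (if (pair? v)
          (begin
            (proc (car v))
            (loop (cdr v)))))))

;; e.g. (lazy-for-each pp (port->lazy (open-input-file "some-file")))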
Heh, interesting approach; turning an imperative (mutating state) generator into a stream, right?
That seemed like the right way to do it.
Actually seems to make sense if you've got an imperative generator. (But even on Gambit, call/cc costs a little bit more than a cons or two, so rewriting that into a straight functional stream generator may be a little bit more efficient.)
(BTW you should probably rename |fn| to something else here as well.)
So, overall, it folds over the stream, creating a thread for each element which sends its result to the mailbox of the reporting thread. The fold counts the number of elements in the stream, and assigns that total number to the reporter's specific field. The reporter loops over its mailbox until it has counted as high as its specific field, and then it exits.
The primordial thread join!s the reporter with a timeout, so that if a thread or two hang for a long period of time, I'll ignore them and quit anyway.
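Roughly this shape, as a simplified sketch (not the actual code; run-parallel and the per-element function work are made up here):

(define (make-reporting-thread proc)
  ;; loops over its mailbox until it has seen as many results
  ;; as its thread-specific field says to expect
  (make-thread
   (lambda ()
     (let loop ((seen 0))
       (if (< seen (thread-specific (current-thread)))
           (begin
             (proc (thread-receive))
             (loop (+ seen 1))))))))

(define (run-parallel work lazy proc timeout)
  (let ((reporter (make-reporting-thread proc)))
    (let loop ((l lazy) (count 0))
      (let ((v (force l)))
        (if (pair? v)
            (begin
              ;; one worker thread per stream element; its result goes
              ;; to the reporter's mailbox in whatever order it finishes
              (thread-start!
               (make-thread
                (lambda () (thread-send reporter (work (car v))))))
              (loop (cdr v) (+ count 1)))
            (begin
              (thread-specific-set! reporter count)
              (thread-start! reporter)
              ;; give up after timeout seconds even if some workers hang
              (thread-join! reporter timeout #f)))))))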
So the order of the values in the output is not the same as in the stream, right? So it's not a "parallel map" really.
Christian.
quoth Christian:
In our case (parallel gcc), it's problematic since running right below the process limit will make gcc fail when it tries to run subprocesses itself.
quite.
Will you be able to deduce gcc's reason for failure from its exit status, or will you have to parse its output? Will you patch it to take an option telling it to retry forking its own children? Maybe we need a new GNU system to implement this idea right everywhere?
well, it'd be nice. I get your point, but that's exactly what you're doing, right? deducing gcc's limit from its failure, and tweaking a parameter until you've manually constrained scheme to the point where it runs correctly. It's a fine solution for your problem, but it would scale better (up & down) with a restarting kind of approach.
The semaphore approach has at least the advantage of being efficient in the sense that new tasks are started as soon as older ones finish (no retry/polling needed, so no phases of inactivity).
that's true.
In the end, it's really just a question of whether the limit should be in the OS or in the app. Lispers think "the lisp image is the system", and Gambit goes that route insofar as it implements such things as code reloading or threading. So it's reasonable to accept that the limit is in this sub-OS, too.
That makes sense. I'll see where I get with my train of thought on this, and keep an eye on your objections. I'm still in the midst of moving from DJB-style systems to a lisp style, so it'll probably take a few attempts to reconcile what I want.
Heh, interesting approach; turning an imperative (mutating state) generator into a stream, right?
Correct. The generator in my program makes an SQL call. I like the metaphor, and I've used it a few times now for ports, etc.
Actually seems to make sense if you've got an imperative generator. (But even on Gambit, call/cc costs a little bit more than a cons or two, so rewriting that into a straight functional stream generator may be a little bit more efficient.)
(BTW you should probably rename |fn| to something else here as well.)
So, something like this:
(define (make-lazy proc)
  (letrec ((nothing (gensym))
           (exit (lambda () nothing))
           (me
            (lambda ()
              (let ((value (proc exit)))
                (if (eqv? value nothing)
                    '()
                    (cons value (delay (me))))))))
    (delay (me))))
That's just the one call to gensym, which vaguely seems like the most expensive part of doing it this way, and it should be just as correct. Could write a generator to break it, but I don't suppose it'd happen accidentally.
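For instance, a toy generator like this one (not from my program), where (exit) isn't in tail position, would break it: with the call/cc version the stream ends cleanly after 1 2 3, but with the gensym version the sentinel is thrown away, 'oops leaks into the stream, and it never terminates.

(define counter 0)

(define numbers
  (make-lazy
   (lambda (exit)
     (set! counter (+ counter 1))
     (if (> counter 3)
         (begin (exit) 'oops)  ;; (exit)'s return value is discarded here
         counter))))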
So the order of the values in the output is not the same as in the stream, right? So it's not a "parallel map" really.
Indeed. Sorry about that, I should have mentioned it. The endpoint of all this is a for-each loop that inserts results through mutation. It's SQL, again. So, for my purposes, I wanted it to send back results in any order, but as quickly as possible.
Thanks for the feedback, and sorry for getting off-topic in this thread.
Lang