With the purchase of my 8-core, shared-memory, SMP Opteron machine and Marc's possible interest (see http://mailman.iro.umontreal.ca/pipermail/gambit-list/2006-January/000560.html) in extending Gambit's runtime to support SMP, three threads of thought (obsessions?) have converged. It would seem to be difficult to develop any one of them without the other two, and one of them involves Gambit development directly, so please forgive the limited relevance of this message to this list.
1. My current image processing research.
I'm not going to explain the details here, except to say that the problems are highly parallelizable, involve minimizing functions over high-dimensional spaces (say, 40 dimensions), and that evaluating the function to be minimized can take a few minutes on a single Opteron core. We don't yet know a provably correct algorithm for the solution of the problem, so we're doing exploratory calculations, and we're still at the "get the right answer, see if that answer's interesting, and if so, see if we can speed up the algorithm to make it practical" stage. Each test calculation takes hours to days to complete. SMP would speed up the process tremendously.
2. SMP in Gambit.
'Nuff said.
3. A new framework for array processing, with applications to image processing.
People who develop new algorithms for image processing often start in Matlab because it's interactive, it operates on entire matrices at a time, and it has a lot of built-in functions. That last part is the killer-app aspect of Matlab: if Matlab has the functions you want built in, then you're golden; the code runs fast, it's easy to program, etc.
If, however, you're developing new algorithms, rather than just trying to use a known algorithm, it's possible, or even likely, that Matlab won't have the built-in functions you need. Then you're reduced to "writing Fortran/C code in Matlab": writing loops that operate on one number/pixel/voxel/... at a time. Then you find out that not only is it difficult to write code in this style, but Matlab executes code like that incredibly slowly. So the graduate student rewrites the code in C, which is even more difficult; now it runs quickly, but it's nearly impossible to debug or change. (And it's usually the graduate student going this route, because they won't believe from the beginning that Matlab won't cut it, and then they think that the only way to get the code to run in 20 seconds instead of two hours is to write it in C.)
So, I want to fix this problem. My goal is a framework in Scheme for developing new algorithms in array/image processing that has the following properties:
A. The speed of programs developed with this framework is within an order of magnitude of the speed of similar programs written in C.
B. It's at least as easy to develop new algorithms in this framework as it is to work in Matlab when Matlab's built-in functions suffice.
I've been writing all my image processing algorithms in this evolving framework for at least five years now. In functional programming terms, the basic operations are various flavors of map, reduce, curry, and, when you're really in a pickle, for-each, on multi-dimensional arrays containing heterogeneous or homogeneous data types. (You use curry when you want to slice and dice multi-dimensional images in various ways, e.g., to view a three-dimensional image of a body in a different image plane than the one in which the body was scanned originally.) I'm working on an SRFI submission.
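To make "curry as slicing" concrete, here's a toy sketch (not the framework's actual API; the names and signatures here are made up for illustration): if you view an array as a function from indices to values, then currying an index yields a lower-dimensional slice without copying any data.

    ;; A hypothetical 3-D image, viewed as a function from
    ;; (plane row col) indices to pixel values.
    (define (image plane row col)
      (+ (* 10000 plane) (* 100 row) col))

    ;; Fixing the first index gives a 2-D slice, i.e., one plane
    ;; of the image.  (array-curry is a made-up name here.)
    (define (array-curry a i)
      (lambda (row col) (a i row col)))

    (define plane-2 (array-curry image 2))
    ;; (plane-2 3 4) => 20304

Currying along a different axis (fixing row or col instead of plane) gives the slices needed to view the volume in a different image plane, as in the body-scan example above.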
Now here's the convergence. Some people at Google just made a big deal about their map-reduce algorithms that they use to process multiple gigabytes of data over thousands of processors. To have this work, they need two things, which can be explained in SRFI-1 terms: first, in
(map f l1 l2 ...)
it can't matter in what order f is applied to the elements of the lists; in fact, f has to be "thread safe": any parallel executions of f applied to the elements of the lists must give the same answer. And second, in
(reduce op id l)
the same must be true of op, and op must also be associative. (The property needed for map is more restrictive than the R5RS property for argument evaluation: R5RS is allowed to assume that the arguments to a function end up with the same values no matter in what order they are computed, but they must still be computed sequentially in some order; no parallelism, no threads, etc.) So you realize that if you want to parallelize multi-dimensional array versions of these functions, you need the same properties.
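Here's a minimal sketch of why associativity is what buys you parallelism. It splits the data in half, reduces each half (one half on its own thread), and combines the two results with op; the grouping of applications differs from a sequential left fold, so the answer is the same only when op is associative. It uses SRFI-18-style threads as in Gambit (make-thread, thread-start!, thread-join!), though real speedup of course requires exactly the SMP runtime this message is asking for.

    ;; Divide-and-conquer reduce over a vector.  Correct only when
    ;; op is associative.  A sketch, not a tuned implementation:
    ;; a real one would stop spawning threads below some cutoff.
    (define (parallel-reduce op id vec)
      (let loop ((lo 0) (hi (vector-length vec)))
        (cond ((= lo hi) id)
              ((= (+ lo 1) hi) (vector-ref vec lo))
              (else
               (let* ((mid (quotient (+ lo hi) 2))
                      (right (thread-start!
                              (make-thread (lambda () (loop mid hi)))))
                      (left (loop lo mid)))
                 (op left (thread-join! right)))))))

    ;; (parallel-reduce + 0 (vector 1 2 3 4 5)) => 15
    ;; + is associative, so any grouping gives the same answer.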
On the other hand, an array for-each, say,
(array-for-each f a1 a2 a3 ...)
must apply f in some fixed order, and sequentially. And you realize that either you supply versions of array-map and array-reduce with the same guarantee, or you decide that the occasions when array-map and array-reduce need these properties are so few and far between that you leave it to users to cobble together what they need from array-for-each. (The only time I've needed this property so far is to read and write image files.)
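For example (a sketch; the framework's array-for-each works over multi-dimensional arrays, but a one-dimensional version over a plain vector shows the point), writing an image file depends on side effects happening in exactly one order:

    ;; A 1-D for-each that guarantees left-to-right, sequential
    ;; application; parallelizing or reordering the calls to f
    ;; would scramble the output file below.
    (define (vector-for-each-in-order f vec)
      (do ((i 0 (+ i 1)))
          ((= i (vector-length vec)))
        (f (vector-ref vec i))))

    ;; Writing pixels one at a time, in row-major order:
    ;; (vector-for-each-in-order
    ;;  (lambda (pixel) (write-char (integer->char pixel) port))
    ;;  pixels)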
So, there we go.
Without an SMP version of Gambit, I couldn't develop and test our new parallel image processing framework, or speed up the execution of our new algorithms by a factor of 8 or so.
Without our image processing research applications, I wouldn't have anything with which to test SMP Gambit or my new array-processing framework.
And without the new image-processing framework, I doubt that I could easily develop fast, SMP-capable, image-processing algorithms for my research.
So, yes, "We want our SMP!"
Brad