With the purchase of my 8-core, shared-memory, SMP Opteron machine and Marc's possible interest (see http://mailman.iro.umontreal.ca/pipermail/gambit-list/2006-January/000560.html) in extending Gambit's runtime to support SMP, three threads of thought (obsessions?) have converged. It would seem to be difficult to develop any one of them without the other two, and one of them involves Gambit development directly, so please forgive the limited relevance of this message to this list.
1. My current image processing research.
I'm not going to explain the details here, except to say that the problems are highly parallelizable, involve minimizing functions over high-dimensional spaces (say, 40 dimensions), and that evaluating the function to be minimized can take a few minutes on a single Opteron core. We don't yet know a provably correct algorithm for the solution of the problem, so we're doing exploratory calculations, and we're still at the "get the right answer, see if that answer's interesting, and if so, see if we can speed up the algorithm to make it practical" stage. Each test calculation takes hours to days to complete. SMP would speed up the process tremendously.
2. SMP in Gambit.
'Nuff said.
3. A new framework for array processing, with applications to image processing.
People who develop new algorithms for image processing often start in Matlab because it's interactive, it operates on entire matrices at a time, and it has a lot of built-in functions. The built-in functions are the killer-app aspect of Matlab: if Matlab has the functions you want built in, then you're golden, the code runs fast, it's easy to program, etc.
If, however, you're developing new algorithms, rather than just trying to use a known algorithm, it's possible, or even likely, that Matlab won't have the built-in functions you need. Then you're reduced to "writing Fortran/C code in Matlab", writing loops that operate on one number/pixel/voxel/... at a time. Then you find out that not only is it difficult to write code in this style, but Matlab is also incredibly slow at executing it. So the graduate student rewrites the code in C, which is even more difficult; now it runs quickly, but it's nearly impossible to debug or change. (And it's usually the graduate student going this route, because they won't believe from the beginning that Matlab won't cut it, and then they think that the only way to get the code to run in 20 seconds instead of two hours is to write it in C.)
So, I want to fix this problem. My goal is a framework in Scheme for developing new algorithms in array/image processing that has the following properties:
A. The speed of programs developed with this framework is within an order of magnitude of the speed of similar programs written in C.
B. It's at least as easy to develop new algorithms in this framework as it is to work in Matlab when Matlab's built-in functions suffice.
I've been writing all my image processing algorithms in this evolving framework for at least five years now. In functional programming terms, the basic operations are various flavors of map, reduce, curry, and, when you're really in a pickle, for-each, on multi-dimensional arrays containing heterogeneous or homogeneous data types. (You use curry when you want to slice and dice multi-dimensional images in various ways, e.g., to view a three-dimensional image of a body in a different image plane than the one in which the body was scanned originally.) I'm working on an SRFI submission.
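To give a feel for these operations, here is a minimal, self-contained sketch, illustrative only and not the actual framework; all the array2d- names are made up for this message, and a "2-D array" here is just a data vector plus its dimensions.

(define (make-array2d rows cols fill)
  (vector rows cols (make-vector (* rows cols) fill)))

(define (array2d-ref a i j)
  (vector-ref (vector-ref a 2) (+ (* i (vector-ref a 1)) j)))

(define (array2d-set! a i j v)
  (vector-set! (vector-ref a 2) (+ (* i (vector-ref a 1)) j) v))

;; Elementwise map; the order in which f is applied is unspecified, which is
;; exactly what leaves the door open to a parallel implementation later.
(define (array2d-map f a)
  (let ((result (make-array2d (vector-ref a 0) (vector-ref a 1) 0)))
    (do ((i 0 (+ i 1)))
        ((= i (vector-ref a 0)) result)
      (do ((j 0 (+ j 1)))
          ((= j (vector-ref a 1)))
        (array2d-set! result i j (f (array2d-ref a i j)))))))

;; Reduce all elements with op, starting from id.
(define (array2d-reduce op id a)
  (let ((acc id))
    (do ((i 0 (+ i 1)))
        ((= i (vector-ref a 0)) acc)
      (do ((j 0 (+ j 1)))
          ((= j (vector-ref a 1)))
        (set! acc (op acc (array2d-ref a i j)))))))

;; "Currying" in the sense used above: fixing the row index yields a
;; one-dimensional slice as a function of the remaining index.
(define (array2d-curry a i)
  (lambda (j) (array2d-ref a i j)))

For example, the sum of squared pixel values of an image img would be (array2d-reduce + 0 (array2d-map (lambda (x) (* x x)) img)).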
Now here's the convergence. Some people at Google just made a big deal about their map-reduce algorithms that they use to process multiple gigabytes of data over thousands of processors. To have this work, they need two things, which can be explained in SRFI-1 terms: first, in
(map f l1 l2 ...)
it can't matter in what order f is applied to the elements of the lists; in fact, f has to be "thread safe": any parallel executions of f applied to the elements of the lists must give the same answer. And second, in
(reduce op id l)
the same must be true of op, and op must be associative. (The property needed for "map" is more restrictive than the R5RS property for argument evaluation: R5RS may assume that the arguments to a procedure end up with the same values no matter in what order they are evaluated, but they are still evaluated sequentially in some order, with no parallelism, no threads, etc.) So you realize that if you want to parallelize multi-dimensional array versions of these functions, you need the same properties.
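To make the associativity requirement concrete, here is a small sketch (not Google's code, and not SRFI-1's reduce) comparing an ordinary left-to-right reduction with the divide-and-conquer shape a parallel reduce takes; the two agree exactly when op is associative and id is an identity for op.

(define (take* lst k)                  ; first k elements of lst
  (if (= k 0) '() (cons (car lst) (take* (cdr lst) (- k 1)))))

(define (drop* lst k)                  ; lst without its first k elements
  (if (= k 0) lst (drop* (cdr lst) (- k 1))))

;; Plain sequential reduction, left to right.
(define (seq-reduce op id lst)
  (if (null? lst)
      id
      (seq-reduce op (op id (car lst)) (cdr lst))))

;; The split/combine shape of a parallel reduce; the two recursive calls
;; could run on different processors.
(define (tree-reduce op id lst)
  (let ((n (length lst)))
    (cond ((= n 0) id)
          ((= n 1) (op id (car lst)))
          (else
           (let ((k (quotient n 2)))
             (op (tree-reduce op id (take* lst k))
                 (tree-reduce op id (drop* lst k))))))))

(seq-reduce + 0 '(1 2 3 4)) and (tree-reduce + 0 '(1 2 3 4)) both give 10, because + is associative and 0 is its identity; with a non-associative op such as -, the two shapes can give different answers.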
On the other hand, an array for-each, say,
(array-for-each f a1 a2 a3 ...)
must apply f in some fixed order, and sequentially. And you realize that either you supply versions of array-map and array-reduce that have the same property, or you expect that the times that array-map and array-reduce need these properties are so few and far between that you leave it to the users to cobble together what they need from array-for-each. (The only time I've needed this property so far is to read and write image files.)
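For completeness, here is what an ordered, sequential array-for-each might look like on the toy 2-D arrays from the earlier sketch; again, this is a hypothetical illustration rather than the real framework.

;; Ordered, sequential traversal: rows and columns are visited in a fixed
;; order, which is what you want when, say, streaming pixel values out to
;; an image file.  Reuses the array2d- helpers from the earlier sketch.
(define (array2d-for-each f a)
  (do ((i 0 (+ i 1)))
      ((= i (vector-ref a 0)))
    (do ((j 0 (+ j 1)))
        ((= j (vector-ref a 1)))
      (f (array2d-ref a i j)))))

For example, (array2d-for-each (lambda (pixel) (write pixel) (newline)) img) writes the pixels in scan order to the current output port.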
So, there we go.
Without an SMP version of Gambit, I couldn't develop and test our new parallel image processing framework and speed up execution of our new algorithms by a factor of 8 or so.
Without our image processing research applications, I wouldn't have anything with which to test SMP Gambit or my new array-processing framework.
And without the new image-processing framework, I doubt that I could easily develop fast, SMP-capable, image-processing algorithms for my research.
So, yes, "We want our SMP!"
Brad
On 14-Jan-06, at 4:24 PM, Bradley Lucier wrote:
So, yes, "We want our SMP!"
Brad
SMP is hard. Several things have to be implemented:
1) Memory management needs a facelift. If there are several processors, you probably need each one to allocate in a (small) local heap, to avoid having to enter a critical section on each allocation. Garbage collection needs to be parallel, otherwise you won't get full benefit from your parallel hardware. Given that many things related to memory management have to change to get this working, it's probably a good idea to go to the (relatively small) extra trouble of making the garbage collector incremental (so that you don't have to barrier-sync the processors before the GC can start). All of this **will** slow down object access, allocation and garbage collection speed (per processor); I estimate the overhead will be 20-50% on memory-intensive programs.
2) The runtime system has to be made OS-thread-safe. This means that OS mutexes are used to implement critical-sections. This may hurt performance (currently several critical-sections are implemented by simply disabling generation of interrupt checking code, or by knowing that C code cannot be interrupted by other threads).
3) The thread model has to be extended. My idea currently is to support two types of threads: OS threads, and lightweight threads (implemented with continuations). The runtime system has basically no notion of physical processor; it only knows about OS threads. Each OS thread defines an execution context for Scheme (think of it as a virtual machine) with its own local heap and lightweight-thread scheduler. An OS thread can access objects allocated in some other OS thread's local heap. When the system starts up, the runtime system starts as many OS threads as there are physical processors. The OS threads can steal lightweight threads from other OS threads in order to balance the work. Supporting lightweight-thread priorities will be a real challenge in this model (unless the lightweight-thread scheduler is centralized, but this will be a performance bottleneck).
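As a rough sketch of what user code might look like under this model, here is a naive parallel map written with Gambit's existing lightweight-thread procedures (make-thread, thread-start!, thread-join!); the worker names in the usage comment are hypothetical. Today all of these threads are multiplexed on a single OS thread; under the design above the OS threads would steal them to balance the work, with no change to code like this.

(define (parallel-map f lst)
  ;; start one lightweight thread per element, then collect the results
  (map thread-join!
       (map (lambda (x)
              (thread-start! (make-thread (lambda () (f x)))))
            lst)))

;; e.g. evaluating an expensive objective function at many points:
;;   (parallel-map evaluate-objective list-of-parameter-vectors)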
I have started looking into this and have even changed a few things in gambit.h to accommodate this model, but it sounds like several months of delicate implementation work... your 8-way Opteron may be outdated when I'm finally done!
Marc
On Jan 14, 2006, at 4:35 PM, Marc Feeley wrote:
your 8-way Opteron may be outdated when I'm finally done!
Ah, it'll be like the 4-way Opteron I bought last spring, which I can certainly still put to good use. And we'll be applying for funds for a 16- to 32-way machine, for sure, while you're working on it!
Brad