On Sat, 8 Dec 2007, Bradley Lucier wrote:
> In the scripting language/low-level language model that is popular these days in scientific computing, the "interface" between these languages is fairly fixed, as it is often difficult to achieve high performance in the scripting language or high flexibility in the low-level language. In Scheme, I can move that boundary just by choosing different implementation strategies for parts of the code.
Yes, that is right. It might even be an oft-neglected argument; but I also suspect that many people are just not asking themselves the question at all, and simply assume that the project is gonna use multiple languages with C++ at the centre.
> So nearly all the value in the system is in the high-level parts, being able to take an algorithm from a textbook or paper and translate it into code nearly verbatim (after you struggle to really understand the half-page algorithm ;-).
I'd say that a lot of languages aren't bad at direct translations from textbook algorithms. I'd even say that LISP's prefix list notation is at a disadvantage there, because so many books write their pseudo-code in the usual Pascal/Ada/C/Python mishmash. For everything else, LISP is pretty much better, especially with a good macro system. C++ is often not that far behind, even though its macro system sucks.
> The reason that the system is (nearly) as fast as one programmed in C or C++ is that almost all the floating-point operations in a multigrid method for solving a finite-element method for elliptic or parabolic PDEs, say, are in sparse-matrix--vector multiplication, and that operation is limited by memory bandwidth in either language. So the fact that the final assembly code for floating-point vector accesses in Gambit-C--generated code is about 1/2 the speed of that in C doesn't matter, we're always waiting for memory in either case.
If you want to beat C++ on this, you could look at how to stream the data from one component to the other so that they share the bandwidth: instead of doing several passes one after the other, start them in near-parallel. It might be easier to do this with some kind of coroutines (which you can't really do in C++), or it might not.
What I said may make more sense for sparse matrices than for dense ones, because dense matrices are usually split into rows or columns rather than into square blocks. In any case, you have to worry about the order in which the data is sent, so that you don't have to buffer much and don't end up touching too much RAM at a time.
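To make that concrete, here is a very rough sketch in Gambit-style Scheme with SRFI-4 f64vectors; all the stage names and sizes are made up, it's just to show the shape of the thing. Instead of running stage 1 over the whole vector and then stage 2 over the whole vector, you run both over one cache-sized block at a time, so that the intermediate block is still hot when stage 2 reads it:

;; made-up sizes and stages, for illustration only
(define n   1048576)
(define blk 1024)                         ; something around cache size
(define x   (make-f64vector n 1.0))
(define tmp (make-f64vector blk 0.0))     ; intermediate: one block only
(define z   (make-f64vector n 0.0))

(define (stage1! base)                    ; placeholder first pass
  (let loop ((i 0))
    (if (< i blk)
        (begin
          (f64vector-set! tmp i (* 0.5 (f64vector-ref x (+ base i))))
          (loop (+ i 1))))))

(define (stage2! base)                    ; placeholder second pass
  (let loop ((i 0))
    (if (< i blk)
        (begin
          (f64vector-set! z (+ base i) (+ 1.0 (f64vector-ref tmp i)))
          (loop (+ i 1))))))

(define (run-blocked!)
  (let loop ((base 0))
    (if (< base n)
        (begin
          (stage1! base)                  ; first pass, this block only
          (stage2! base)                  ; second pass while the block is still hot
          (loop (+ base blk))))))

With a sparse matrix, the interesting part is exactly the one I glossed over here: choosing the traversal order so that each block you hand to the next stage is small enough to stay resident.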
But be warned that, even nowadays, cache RAM can be quite a bit slower than registers. I have made a system which streams at the level of cache RAM instead of registers, and it doesn't come anywhere close in speed to a hand-written loop in which the whole chain of operations is compiled together. For that I'd need a runtime compiler, and with C++ I can forget about it.
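For comparison, the kind of "whole chain of operations compiled together" loop I mean is the fully fused one (reusing x, z and n from the sketch above), where the intermediate value never even has to leave a register, assuming the compiler does the obvious thing:

(define (run-fused!)
  (let loop ((i 0))
    (if (< i n)
        (let ((t (* 0.5 (f64vector-ref x i))))   ; stage 1
          (f64vector-set! z i (+ 1.0 t))         ; stage 2, t can stay in a register
          (loop (+ i 1))))))

That's the version that is easy to write by hand for one fixed chain of operations, and hard to get out of a generic streaming framework without generating code at runtime.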
I haven't considered Scheme seriously so far because I assume that I will have trouble with the garbage collector. I already have garbage-collector slowdowns in my current version, and I'd like to get rid of them; I wouldn't switch to Scheme without knowing how much I can configure the GC, get rid of it, or otherwise make it run much more smoothly. I do realtime programming.
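What I'd want, I suppose, is to be able to write the realtime path so that it allocates nothing at all, so the collector has no reason to run there. If I understand Gambit's declare forms correctly, that would look something like this (again just a sketch, with made-up sizes):

(declare (standard-bindings) (extended-bindings) (not safe) (flonum))

(define buf (make-f64vector 4096 0.0))    ; allocated once, outside the realtime path

(define (process-block!)
  ;; in-place update only: no consing here, so nothing for the GC to collect
  (let loop ((i 0))
    (if (< i 4096)
        (begin
          (f64vector-set! buf i (* 0.99 (f64vector-ref buf i)))
          (loop (+ i 1))))))

I gather there is also (##gc) to force a collection at a moment of my own choosing, and runtime options for sizing the heap, but that's exactly the kind of thing I'd want to verify before committing to a switch.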
_ _ __ ___ _____ ________ _____________ _____________________ ... | Mathieu Bouchard - tél:+1.514.383.3801, Montréal QC Canada