Marc:
About a parallel bignum library ...
I think it depends on what we want to accomplish with bignums in Gambit.
When we started, we wanted to use Jon L White's ideas to design a bignum implementation that was fully portable and relatively efficient while still using naive algorithms, for 32- and 64-bit, big- and little-endian machines, all written in Scheme except for a small number of word-level primitives. So we did that.
Then we saw that Karatsuba's algorithm for multiplication is very easy to implement, and we were off to the races.
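For concreteness, here's a minimal sketch of the idea on nonnegative exact integers (my illustration only, not Gambit's actual bignum code, which works on the word-level representation): split each operand in half, and one extra multiplication of the half-sums replaces the two cross-term multiplications.

  ;; Minimal sketch of Karatsuba multiplication on nonnegative exact
  ;; integers -- illustration only, not Gambit's bignum code.
  (define (karatsuba x y)
    (if (or (< x 1000) (< y 1000))    ; small operands: built-in multiply
        (* x y)
        (let* ((k (quotient (max (integer-length x) (integer-length y)) 2))
               (mask (- (arithmetic-shift 1 k) 1))
               (x-hi (arithmetic-shift x (- k)))  ; x = x-hi*2^k + x-lo
               (x-lo (bitwise-and x mask))
               (y-hi (arithmetic-shift y (- k)))
               (y-lo (bitwise-and y mask))
               (a (karatsuba x-hi y-hi))
               (b (karatsuba x-lo y-lo))
               ;; the trick: one multiplication yields both cross terms
               (c (- (karatsuba (+ x-hi x-lo) (+ y-hi y-lo)) a b)))
          (+ (arithmetic-shift a (* 2 k))
             (arithmetic-shift c k)
             b))))

Three recursive multiplications of half-size operands instead of four is what gives the O(n^log2(3)) ~ O(n^1.585) running time.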
In the end, we used a relatively novel algorithm for large multiplications (but not too large: it's good up to about a billion bits).
Given this multiplication algorithm, we added a novel GCD, a state-of-the-art sqrt (following Zimmermann), and a division based on the usual trick of computing the inverse and multiplying (which helped with conversion of large numbers to strings).
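As a sketch of that trick for a p-bit divisor d (so 2^(p-1) <= d < 2^p), Newton's iteration roughly doubles the number of correct bits of the reciprocal per step, and each step costs only multiplications. The helper below is a name I'm making up for illustration, not Gambit's actual division routine:

  ;; Sketch: approximate floor(2^(2p)/d) by Newton's iteration
  ;; x <- x + x*(2^(2p) - d*x)/2^(2p).  Hypothetical helper, not
  ;; Gambit's actual division code.
  (define (approx-reciprocal d p)               ; assumes 2^(p-1) <= d < 2^p
    (let loop ((x (arithmetic-shift 1 p))       ; crude initial guess
               (k (+ (integer-length p) 1)))    ; ~log2(p)+1 steps suffice
      (if (= k 0)
          x
          (let ((e (- (arithmetic-shift 1 (* 2 p)) (* d x)))) ; 2^(2p) - d*x
            (loop (+ x (arithmetic-shift (* x e) (* -2 p)))   ; x += x*e/2^(2p)
                  (- k 1))))))

For an n of up to 2p bits, floor(n/d) is then (n * x) >> 2p up to a small correction found by comparing n against the candidate quotient times d. Since converting a bignum to a string repeatedly divides by large powers of 10, a fast division is what speeds up that conversion.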
For a while, Gambit was faster than GMP for a few operations over a certain range of input sizes, but this has not been true for a long time; GMP is now faster than Gambit at everything. Many of GMP's low-level routines are written in assembly code, tuned for various Intel, AMD, and Arm processors.
So now that we might have parallelism, what should we do?
If you want a testbed for the parallel runtime, bignum arithmetic would be a possibility. But I don't see Gambit beating GMP for any reasonable range of argument sizes and choice of operations.
Other application areas might be just as good a testbed for parallelism.
What are our goals?
Brad