Brad,

I can clearly see times when it would be preferable to distribute the execution of multiplications, divisions (and square roots) across all of Gambit's processors.

Maybe this is particularly relevant in places where the numerator and denominator of a rational are very big, e.g. (/ a b) where a and b are both the result of (/ (random-integer (expt 10 30)) (random-integer (expt 10 25))), or with even higher exponents than that.
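
For concreteness, here is the kind of expression meant, written out
(random-integer and the time special form are built into Gambit; with
much larger exponents the cost becomes very noticeable):

  (define a (/ (random-integer (expt 10 30)) (random-integer (expt 10 25))))
  (define b (/ (random-integer (expt 10 30)) (random-integer (expt 10 25))))

  ;; Dividing two rationals multiplies their parts crosswise and then
  ;; reduces the result with a gcd, so the work grows with operand size.
  (time (/ a b))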

The point is that a program sometimes cannot anticipate when it will run into such heavy operations; when it does, a user is generally (though not always) waiting, so it would be preferable to parallelize them.

Could some kind of parallelize-math? setting be introduced, and if so, with what scope: thread-local, processor-local, or global?
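
Purely as a sketch of what such a setting could look like (the name and
the way the runtime would consult it are invented here), a thread-local
variant could simply be a Gambit parameter object, since parameterize is
scoped to the calling thread's dynamic environment:

  (define parallelize-math? (make-parameter #f))

  ;; A user waiting on a heavy computation could then opt in locally;
  ;; compute-with-big-rationals is a hypothetical stand-in.
  (parameterize ((parallelize-math? #t))
    (compute-with-big-rationals))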

By the way, how many cores could the rational arithmetic above be spread across, and approximately what speedup over serial execution could realistically be attained?

And what about the same computation, but with the exponents doubled to 60 and 50?

This is not a super high priority, but thanks for bringing it up.

Thanks,
Adam

2016-11-28 4:08 GMT+08:00 Bradley Lucier <lucier@math.purdue.edu>:
Marc:

Because of accumulated roundoff error in IEEE floating-point arithmetic,
Gambit's FFT-based bignum multiplication algorithm is limited to
multiplying two numbers each of which has <= 2^{29} bits (on a 64-bit
machine).
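
For a sense of scale (a rough back-of-the-envelope): 2^{29} bits is
about 1.6 * 10^8 decimal digits, while a billion decimal digits needs
about 3.3 * 10^9 bits:

  (* (expt 2 29) (/ (log 2) (log 10)))  ; => ~1.6e8, decimal digits in a 2^{29}-bit number
  (/ 1e9 (/ (log 2) (log 10)))          ; => ~3.3e9, bits in a billion-digit number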

Multiplying larger numbers falls back on Karatsuba multiplication, which
cuts the size of the multiplicands roughly in half in each step, until
you get back down to numbers of <= 2^{29} bits again, in which case FFT
is used again.
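
Roughly, the recursion looks like this (a toy sketch using Gambit's
bitwise operations, not the runtime's actual code); note that the three
recursive products are independent of one another:

  (define karatsuba-threshold 4096)   ; illustrative cutoff only

  ;; Toy Karatsuba for nonnegative exact integers.
  (define (karatsuba x y)
    (let ((n (max (integer-length x) (integer-length y))))
      (if (<= n karatsuba-threshold)
          (* x y)                     ; small enough: use the built-in multiplier
          (let* ((half (quotient n 2))
                 (lo-mask (- (arithmetic-shift 1 half) 1))
                 (x-hi (arithmetic-shift x (- half)))
                 (x-lo (bitwise-and x lo-mask))
                 (y-hi (arithmetic-shift y (- half)))
                 (y-lo (bitwise-and y lo-mask))
                 ;; These three products don't depend on each other, so each
                 ;; could in principle be handed to a different processor.
                 (z2 (karatsuba x-hi y-hi))
                 (z0 (karatsuba x-lo y-lo))
                 (z1 (karatsuba (+ x-hi x-lo) (+ y-hi y-lo))))
            (+ (arithmetic-shift z2 (* 2 half))
               (arithmetic-shift (- z1 z2 z0) half)
               z0)))))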

So, in fact, to compute a billion decimal digits of pi, we rely on
Karatsuba to do the largest multiplications (and divisions and square
roots, since they all reduce to multiplication).
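
(The standard reduction, not necessarily exactly what Gambit does:
division comes from Newton's iteration for the reciprocal,
x_{k+1} = x_k * (2 - d * x_k), which uses only multiplications and
roughly doubles the number of correct digits per step; square roots
come from the analogous iteration for 1/sqrt(d).)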

On a machine with enough memory, all the "sub-multiplications" in
Karatsuba can be done in parallel.  I haven't figured out how much
memory is needed; it takes a few GB of temporary RAM to multiply the
largest numbers with the FFT algorithm.

I believe it took about 16GB total to calculate a billion decimal digits
of pi using the current serial algorithm.

Once parallel gambit is usable, computing pi to a billion (or even 10
billion, who knows) digits by parallelizing the larger Karatsuba
multiplications might be a good test of the system.

Brad