Marc:
Because of accumulated roundoff error in IEEE floating-point arithmetic, Gambit's FFT-based bignum multiplication algorithm is limited to multiplying two numbers each of which has <= 2^{29} bits (on a 64-bit machine).
Multiplying larger numbers falls back on Karatsuba multiplication, which cuts the size of the multiplicands roughly in half in each step, until you get back down to numbers of <= 2^{29} bits again, in which case FFT is used again.
So, in fact, to compute a billion decimal digits of pi, we rely on Karatsuba to do the largest multiplications (and divisions and square roots, since they all reduce to multiplication).
On a machine with enough memory, all the "sub-multiplications" in Karatsuba can be done in parallel. I haven't figured out how much memory is needed; it takes a few GB of temporary RAM to multiply the largest numbers with the FFT algorithm.
I believe it took about 16GB total to calculate a billion decimal digits of pi using the current serial algorithm.
Once parallel gambit is usable, computing pi to a billion (or even 10 billion, who knows) digits by parallelizing the larger Karatsuba multiplications might be a good test of the system.
Brad
Brad,
I can clearly see times when it would be preferable to distribute the execution of multiplications, divisions (and square roots) across all Gambit processors.
Maybe this is particularly relevant in places where the numerator and denominator of a fraction are very big, e.g. (/ a b) where a and b are both the result of (/ (random-integer (expt 10 30)) (random-integer (expt 10 25))), or with even higher exponents than that.
The point is that a program sometimes cannot anticipate when it will run into such heavy operations; when it does, a user is generally (though not always) waiting, so it would be preferable to parallelize them.
Could some kind of parallelize-math? setting be introduced, and with what scope: thread-local, processor-local, or global?
By the way, how many cores could the fraction above be spread across, and approximately what speedup over serial execution could be attained?
And what about the same computation with the exponents doubled in size, to 60 and 50?
This is not a super high priority but thanks for bringing it up.
Thanks, Adam
On 11/27/2016 07:02 PM, Adam wrote:
Maybe this is particularly relevant in places where the numerator and denominator of a fraction are very big, e.g. (/ a b) where a and b are both the result of (/ (random-integer (expt 10 30)) (random-integer (expt 10 25))), or with even higher exponents than that.
Numbers of this size aren't really "big" for bignum purposes. For example, we can find the bit length of a random integer < 10^30:
(integer-length (random-integer (expt 10 30)))
100
So that random integer would fit into two 64-bit words (plus a header word) in a bignum. It would hurt to try to parallelize things at this level.
For really large multiplications/divisions/square roots (with results of K > 10^9 bits), when we use Karatsuba multiplication there are three multiplications of size K/2 bits. This is recursive, so if K/2 is again too big for our FFT routine we get 9 multiplications of size K/4, or 27 multiplications of size K/8, etc., and each of these multiplications takes quite a few operations itself.
This part would be easy to code.
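For concreteness, here is a rough sketch of the recursion on ordinary exact integers (placeholder names, not the actual _num.scm routines, which work on the underlying digit vectors):

;; Rough sketch only; fft-mul, fft-cutoff-bits and the helpers are
;; placeholder names, and x and y are assumed to be >= 0.
(define fft-cutoff-bits (expt 2 29))

(define (low-bits x k)  (bitwise-and x (- (arithmetic-shift 1 k) 1)))
(define (high-bits x k) (arithmetic-shift x (- k)))

(define (kara-mul x y)
  (let ((n (max (integer-length x) (integer-length y))))
    (if (<= n fft-cutoff-bits)
        (fft-mul x y)                 ; small enough for the FFT routine
        (let* ((k  (quotient n 2))
               (x0 (low-bits x k)) (x1 (high-bits x k))
               (y0 (low-bits y k)) (y1 (high-bits y k))
               ;; the three sub-multiplications of roughly n/2 bits;
               ;; these are the independent pieces that could run in parallel
               (z0 (kara-mul x0 y0))
               (z2 (kara-mul x1 y1))
               (z1 (- (kara-mul (+ x0 x1) (+ y0 y1)) z0 z2)))
          (+ z0
             (arithmetic-shift z1 k)
             (arithmetic-shift z2 (* 2 k)))))))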
Brad
2016-11-29 0:21 GMT+08:00 Bradley Lucier lucier@math.purdue.edu:
For really large multiplications/divisions/square roots (with results of K > 10^9 bits), when we use Karatsuba multiplication there are three multiplications of size K/2 bits. This is recursive, so if K/2 is again too big for our FFT routine we get 9 multiplications of size K/4, or 27 multiplications of size K/8, etc., and each of these multiplications takes quite a few operations itself.
Wait... at around what size did you say parallelization of mul/div/sqrt starts becoming worthwhile?
This part would be easy to code.
Ok cool. :)
Speaking of bignum performance, what about addition and subtraction? How much of the work is spent doing the actual add/sub, and how much is spent on the least-common-denominator calculation that's done for each operation?
On 12/01/2016 09:43 PM, Adam wrote:
Wait... at around what size did you say parallelization of mul/div/sqrt starts becoming worthwhile?
I didn't say, but around 10s or 100s of thousands of bits maybe.
Speaking of bignum performance, what about addition and subtraction? How much of the work is spent doing the actual add/sub, and how much is spent on the least-common-denominator calculation that's done for each operation?
Most of it, usually by far, is in the gcd calculations, except in special cases.
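(A tiny worked example of why, assuming the usual way a ratnum sum is normalized:)

;; (+ 1/6 1/10)
;; raw sum:   (/ (+ (* 1 10) (* 1 6)) (* 6 10))  =>  16/60
;; normalize: (gcd 16 60) => 4, so the result is 4/15
;; With bignum numerators and denominators, that gcd dwarfs the add itself.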
Brad
Yes, parallelizing Karatsuba multiplication should be very easy. That will make an interesting experiment when the multiple-threaded VM configuration is stable. I expect very close to linear speedup given the independent sub-multiplications.
Marc
On 11/28/2016 10:56 AM, Marc Feeley wrote:
Yes, parallelizing Karatsuba multiplication should be very easy. That will make an interesting experiment when the multiple-threaded VM configuration is stable. I expect very close to linear speedup given the independent sub-multiplications.
The Chudnovsky algorithm for $\pi$ is implemented using binary splitting of a series, and the independent sub-series can also be computed in parallel and then combined. This parallelism can be exploited even for not-so-large numbers of digits.
You get about 14 digits accuracy per term in the series, so for a billion digits you'd need about 70 million terms.
It'd be interesting to see how much overhead one incurs in distributing this computation to several processes. I imagine that processing 10,000 to 100,000 terms in a single process might be a good cutoff point, but we'd have to see.
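Schematically, the splitting looks like this (term-gpq and combine-gpq stand in for the actual Chudnovsky recurrences):

;; Schematic only: each call accounts for terms a <= n < b of the series.
(define (ch-split a b)
  (if (= (- b a) 1)
      (term-gpq a)                     ; base case: a single term
      (let* ((mid  (quotient (+ a b) 2))
             (gpq1 (ch-split a mid))   ; these two recursive calls are
             (gpq2 (ch-split mid b)))  ; independent, hence parallelizable
        (combine-gpq gpq1 gpq2))))     ; a few big multiplications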
Brad
That will be a perfect use case for the multiple-threaded VM. Can you write a parallel version of the algorithm, using thread-start! and thread-join! to manage the parallelism? You can test it on a single-threaded VM and then I can give it a try here on a 64 processor machine.
Marc
On 12/11/2016 08:29 AM, Marc Feeley wrote:
That will be a perfect use case for the multiple-threaded VM.
That's why I told you about it, to tempt you ;-).
Can you write a parallel version of the algorithm, using thread-start! and thread-join! to manage the parallelism? You can test it on a single-threaded VM and then I can give it a try here on a 64 processor machine.
You want me to do the work and you to have the fun!
More seriously, I just looked through the thread section of the Gambit manual and I do not yet understand the thread model that you've built into Gambit. So I don't know how to do what you want.
I'm attaching the single-threaded code. It has the lines
(let* ((mid (quotient (+ a b) 2))
       (gpq1 (ch-split a mid))   ;<<<<====
       (gpq2 (ch-split mid b))   ;<<<<====
       (g1 (car gpq1))
       (p1 (cadr gpq1))
       (q1 (caddr gpq1))
       (g2 (car gpq2))
       (p2 (cadr gpq2))
       (q2 (caddr gpq2)))
The two calls to ch-split can be made in parallel.
I recommend introducing a parameter M and running those two calls in parallel if $b-a>M$. (Here $b-a$ is the number of terms to be computed in the partial series at that point; an M on the order of 10,000 or so should work.)
The resulting code might be a good extended example for the manual.
Sorry I can't be more help.
Brad
PS: By the way, a partial run in the interpreter on my Linux box at home yields:
firefly:~/programs/gambit/gambiteer> gsi chud2
Chudnovsky's algorithm using binary splitting in Gambit Scheme: digits 10, CPU time: 0.. Last 5 digits 26535.
Chudnovsky's algorithm using binary splitting in Gambit Scheme: digits 100, CPU time: 0.. Last 5 digits 70679.
Chudnovsky's algorithm using binary splitting in Gambit Scheme: digits 1000, CPU time: 0.. Last 5 digits 1989.
Chudnovsky's algorithm using binary splitting in Gambit Scheme: digits 10000, CPU time: .012. Last 5 digits 75678.
Chudnovsky's algorithm using binary splitting in Gambit Scheme: digits 100000, CPU time: .216. Last 5 digits 24646.
Chudnovsky's algorithm using binary splitting in Gambit Scheme: digits 1000000, CPU time: 3.0439999999999996. Last 5 digits 58151.
Chudnovsky's algorithm using binary splitting in Gambit Scheme: digits 10000000, CPU time: 47.112. Last 5 digits 55897.
On 12/11/2016 01:59 PM, Bradley Lucier wrote:
More seriously, I just looked through the thread section of the Gambit manual and I do not yet understand the thread model that you've built into Gambit. So I don't know how to do what you want.
Perhaps someone on the list can hack up something.
Brad
It’s a simple modification of your code:
(let* ((mid (quotient (+ a b) 2))
       (gpq1-thread (thread-start! (make-thread (lambda () (ch-split a mid))))) ;; modified
       (gpq2 (ch-split mid b))
       (gpq1 (thread-join! gpq1-thread)) ;; added
       (g1 (car gpq1))
       (p1 (cadr gpq1))
       (q1 (caddr gpq1))
       (g2 (car gpq2))
       (p2 (cadr gpq2))
       (q2 (caddr gpq2)))
Marc
Not yet… I’m currently rewriting the scheduler for SMP… so you’ll have to wait.
Marc
On Dec 11, 2016, at 9:53 PM, Bradley Lucier lucier@math.purdue.edu wrote:
Is there a way for me to try to run it in parallel?
Brad
<chud-parallel.scm>
I tried your program on my working copy of Gambit (rather brittle, but good enough to run simple tests like this). I computed 1 million digits of pi. On 1 processor it takes 15 seconds real-time, and on 7 processors it takes 9 seconds. Not a great improvement.
When I look at the activity log (attached below) I see that the program is not very parallel. It seems threads are only generated in the main part of the computation, between 2000 ms and 5000 ms into the execution. When I review the source code, I think the first 2000 ms are used to compute sequentially the square root of a 2 million digit bignum, and after the 5000 ms mark, the program computes sequentially (quotient (* p ch-C sqrt-C) (* ch-D (+ q (* p ch-A)))) with large bignums.
So to improve the performance, it would be necessary to improve the parallelism in those two parts. The square root could probably be done concurrently with the main computation (for a small number of processors, otherwise a parallel square root algorithm needs to be devised). The tail part of the computation might be improved by parallel algorithms for multiplication and division.
Marc
On 12/12/2016 04:39 PM, Marc Feeley wrote:
I tried your program on my working copy of Gambit (rather brittle, but good enough to run simple tests like this).
Thanks, it looks very interesting!
When I review the source code, I think the first 2000 ms are used to compute sequentially the square root of a 2 million digit bignum, and after the 5000 ms mark, the program computes sequentially (quotient (* p ch-C sqrt-C) (* ch-D (+ q (* p ch-A)))) with large bignums.
So to improve the performance, it would be necessary to improve the parallelism in those two parts.
Absolutely.
The square root could probably be done concurrently with the main computation (for a small number of processors, otherwise a parallel square root algorithm needs to be devised).
Yes, the computation of the square root should be carried out in parallel with the computation of gpq.
The tail part of the computation might be improved by parallel algorithms for multiplication and division.
The two arguments to
(quotient (* p ch-C sqrt-C) (* ch-D (+ q (* p ch-A))))
can be computed in parallel.
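For example, using the same thread-start!/thread-join! pattern as in the modified ch-split (an untested sketch):

(let* ((num-thread (thread-start!
                    (make-thread (lambda () (* p ch-C sqrt-C)))))
       (den (* ch-D (+ q (* p ch-A))))   ; computed on the current thread
       (num (thread-join! num-thread)))
  (quotient num den))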
I appreciate you trying this experiment. The activity log is a pretty good tool!
Brad
I’m thinking it would be nice if _num.scm were improved to implement parallel algorithms when a configuration option is used. This could be based on the “future” and “touch” parallelism constructs. Just a quick reminder:
(future expr)
conceptually spawns a new thread that computes expr and returns a representative of the result (a “placeholder”), and
(touch placeholder)
waits for the result to be available and returns the result. So the idiom
(let* ((a (future EXPR1))
       (b EXPR2))
  (combine (touch a) b))
will compute EXPR1 and EXPR2 in parallel and call “combine” when the two results are available.
These forms can be implemented easily with Gambit threads:
(define-macro (future expr) `(thread-start! (make-thread (lambda () ,expr))))
(define (touch x) (thread-join! x))
Since most parallel fork/join algorithms will probably want to fork threads for numbers of a certain size, it is interesting to consider a “conditional” future construct that takes a condition as argument:
(define-macro (cond-future test expr)
  `(if ,test
       (thread-start! (make-thread (lambda () ,expr)))
       ,expr))
(define (cond-touch x) (if (thread? x) (thread-join! x) x))
That way a parallel algorithm would do:
(let* ((a (cond-future (> (size data) 1000) EXPR1))
       (b EXPR2))
  (combine (cond-touch a) b))
A sequential version would be obtained by defining
(define-macro (cond-future test expr) expr)
(define (cond-touch x) x)
Brad, do you think these forms would be a good basis for implementing parallel algorithms for arithmetic?
Marc
I have tried the “cond-future” idea on an improved chud-parallel.scm that exposes some more parallelism. The run time drops from 9 to 7 seconds. There are still many idle processors at the end of the computation due to the sequential final “quotient” (which takes about 1.8 seconds). Note that globally the processors are active only 52% of the time (length of the lowest green horizontal bar), so if the algorithm was perfectly parallel the run time would be about 3.5 seconds or roughly a factor of 4.5x faster than on 1 processor.
I’ve attached the new code and activity log.
Marc
Marc:
About a parallel bignum library ...
I think it depends on what we want to accomplish with bignums in Gambit.
When we started, we wanted to use Jon L White's ideas to design a bignum implementation that was fully portable and relatively efficient while still using naive algorithms, for 32- and 64-bit, big- and little-endian machines, all written in Scheme except for a small number of word-level primitives. So we did that.
Then we saw that Karatsuba's algorithm for multiplication is very easy to implement, and we were off to the races.
In the end, we used a relatively novel algorithm for large multiplication (but not too large, good up to about a billion bits).
Given this multiplication algorithm, we added a novel GCD, state-of-the-art sqrt (following Zimmermann), and a division based on the usual trick of computing the inverse and multiplying (which helped with conversion of large numbers to strings).
For a while, for a few operations over a certain range of input sizes, Gambit was faster than GMP, but this has not been true for a long time. GMP is now faster than Gambit at everything. Many of the low-level routines in GMP are written in assembly code, tuned for various Intel, AMD, and Arm processors.
So now that we might have parallelism, what should we do?
If you want a test-bed for the parallel runtime, bignum arithmetic would be a possibility. But I don't see Gambit beating GMP for any reasonable range of argument sizes and choice of operations.
Other application areas might be just as good a testbed for parallelism.
What are our goals?
Brad
Gambit’s Scheme implementation of bignums is a good design and I want to keep it, even though GMP is generally faster. Because the bignums are implemented in Scheme, they are portable, relatively easy to maintain, well integrated with Scheme, and “good citizens” in a multithreaded programming system (the bignum operations can be interrupted for preemptive multithreading of Scheme threads).
I’d like the bignum code to be reviewed to see if there is any “low-hanging fruit” for improving performance on a multiprocessor system. For example, I would think the Karatsuba multiplication algorithm can be parallelized by adding 3 “future” constructs in the recursion. With a switch, these futures could be removed to get the usual sequential algorithm.
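Something along these lines, reusing the cond-future/cond-touch forms from my earlier message (a sketch with schematic names rather than the actual _num.scm code, assuming x and y have already been split into x1*2^k + x0 and y1*2^k + y0 as in the usual Karatsuba step):

;; Sketch only; n is the size in bits and parallel-cutoff-bits is a tuning knob.
(let* ((big? (> n parallel-cutoff-bits))
       (z0-f (cond-future big? (kara-mul x0 y0)))
       (z2-f (cond-future big? (kara-mul x1 y1)))
       (z1-raw (kara-mul (+ x0 x1) (+ y0 y1)))  ; third one on the current thread
       (z0 (cond-touch z0-f))
       (z2 (cond-touch z2-f))
       (z1 (- z1-raw z0 z2)))
  (+ z0
     (arithmetic-shift z1 k)
     (arithmetic-shift z2 (* 2 k))))

Forking two of the three sub-multiplications and keeping the third on the current thread avoids leaving the parent thread idle while it waits.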
Do you know if GMP uses thread parallelism to make operations faster?
Marc
On 12/13/2016 12:38 PM, Marc Feeley wrote:
I’d like the bignum code to be reviewed to see if there is any “low-hanging fruit” for improving performance on a multiprocessor system. For example, I would think the Karatsuba multiplication algorithm can be parallelized by adding 3 “future” constructs in the recursion. With a switch, these futures could be removed to get the usual sequential algorithm.
Basically all the bignum code can be parallelized to a greater or lesser extent.
gcd, quotient, and integer-sqrt all depend on multiplication for their speed, so you'd parallelize multiplication. And really, it's only worth parallelizing fft-mul (and karatsuba-mul for arguments that are too large for fft-mul).
The subroutines for fft-mul, together with their complexity for N-bit arguments, annotated EP (for embarrassingly parallel):
make-w: O(N), EP
make-w-rac: O(N), EP
bignum->f64vector-rac: O(N), EP
componentwise-rac-multiply: O(N), EP
componentwise-rac-multiply-conjugate: O(N), EP
componentwise-complex-multiply: O(N), EP
f64vector-rac->bignum: O(N), EP + a cleanup to propagate carries if necessary.
direct-fft-recursive-4: O(N log N), EP
inverse-fft-recursive-4: O(N log N), EP
plus
cleanup: O(N), EP + a cleanup to propagate borrows if necessary.
The routines direct-fft-recursive-4 and inverse-fft-recursive-4 are surprisingly efficient (about 1/2 the speed of FFTW), so even though the other steps are O(N) and the FFTs are O(N log N), the O(N) steps take an unexpectedly (at least to me) large fraction of the total time.
BTW, the results of make-w can be reused for arguments of the same or smaller sizes, so they should be cached.
To parallelize most of the O(N) routines you'd need to turn the basic loops into kernels with parameters and then break up the arguments accordingly.
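For example, a componentwise loop rewritten as a kernel over an index range and split into chunks with the future/touch forms from earlier in the thread (illustrative names, not the actual _num.scm routines):

;; Illustrative sketch only.
(define (componentwise-mul-kernel! a b start end)  ; a[i] := a[i] * b[i] for start <= i < end
  (let loop ((i start))
    (if (< i end)
        (begin
          (f64vector-set! a i (fl* (f64vector-ref a i) (f64vector-ref b i)))
          (loop (+ i 1))))))

(define (parallel-componentwise-mul! a b nthreads)
  (let* ((n     (f64vector-length a))
         (chunk (quotient (+ n (- nthreads 1)) nthreads)))
    (let loop ((start 0) (threads '()))
      (if (< start n)
          (let ((end (min n (+ start chunk))))
            (loop end
                  (cons (future (componentwise-mul-kernel! a b start end))
                        threads)))
          (for-each touch threads)))))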
Here's the time to compute (square x) where x is (expt 3 100000000), with the timings for all the component subroutines.
(time (##make-f64vector (##fx* two^n 2)))
    2 ms real time
    0 ms cpu time (0 user, 0 system)
    1 collection accounting for 2 ms real time (0 user, 0 system)
    no bytes allocated
    320 minor faults
    no major faults
(time (make-w (##fx- log-two^n 1)))
    63 ms real time
    64 ms cpu time (52 user, 12 system)
    no collections
    36320 bytes allocated
    640 minor faults
    no major faults
(time (make-w-rac log-two^n))
    58 ms real time
    56 ms cpu time (44 user, 12 system)
    no collections
    48608 bytes allocated
    640 minor faults
    no major faults
(time (bignum->f64vector-rac x a))
    97 ms real time
    96 ms cpu time (84 user, 12 system)
    no collections
    no bytes allocated
    767 minor faults
    no major faults
(time (componentwise-rac-multiply a rac-table))
    82 ms real time
    80 ms cpu time (80 user, 0 system)
    no collections
    no bytes allocated
    no minor faults
    no major faults
(time (direct-fft-recursive-4 a table))
    1402 ms real time
    1404 ms cpu time (1404 user, 0 system)
    no collections
    80 bytes allocated
    no minor faults
    no major faults
(time (componentwise-complex-multiply a a))
    90 ms real time
    92 ms cpu time (92 user, 0 system)
    no collections
    no bytes allocated
    no minor faults
    no major faults
(time (inverse-fft-recursive-4 a table))
    1293 ms real time
    1292 ms cpu time (1292 user, 0 system)
    no collections
    80 bytes allocated
    no minor faults
    no major faults
(time (componentwise-rac-multiply-conjugate a rac-table))
    85 ms real time
    84 ms cpu time (84 user, 0 system)
    no collections
    no bytes allocated
    no minor faults
    no major faults
(time (f64vector-rac->bignum a result result-length))
    233 ms real time
    232 ms cpu time (232 user, 0 system)
    no collections
    128 bytes allocated
    no minor faults
    no major faults
(time (cleanup x y result))
    0 ms real time
    0 ms cpu time (0 user, 0 system)
    no collections
    no bytes allocated
    no minor faults
    no major faults
(time (* a a))
    3411 ms real time
    3412 ms cpu time (3364 user, 48 system)
    1 collection accounting for 2 ms real time (0 user, 0 system)
    180384 bytes allocated
    2843 minor faults
    no major faults
This result has 39,624,064 "fdigits" or 316,992,512 bits.
So the FFTs take only about 2700 ms of the 3400 ms total time; all the rest of the stuff takes about 700 ms. It's hard to get much speedup here without writing a lot of new code.
Brad
Another thing: once karatsuba-mul subdivides its arguments into half-billion-bit chunks, which fft-mul then multiplies into billion-bit results, each of those results requires over 2 GB of RAM to compute. So you want to be careful about how many of those you fire off in parallel.
Brad