On 12/13/2016 12:38 PM, Marc Feeley wrote:
I’d like the bignum code to be reviewed to see if there are any “low hanging fruit” for improving performance on a multiprocessor system. For example, I would think the karatsuba multiplication algorithm can be parallelized by adding 3 “future” constructs in the recursion. With a switch, these futures could be removed to get the usual sequential algorithm.
Basically all the bignum code can be parallelized to a greater or lesser extent.
gcd, quotient, and integer-sqrt all depend on multiplication for their speed, so you'd parallelize multiplication. And really, it's only worth parallelizing fft-mul (and karatsuba-mul for arguments that are too large for fft-mul).
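To make Marc's "3 futures" suggestion concrete, here is a toy sketch of a Karatsuba recursion with the three recursive products each wrapped in a future built from Gambit threads (thread-join! returns the thunk's value). It works on ordinary non-negative exact integers, not on the fdigit-level representation the real karatsuba-mul uses, and all of the names (kara-mul, future, touch, enable-parallel-karatsuba?, karatsuba-cutoff) are made up for illustration:

(define enable-parallel-karatsuba? #t)  ; the "switch": #f gives the usual sequential algorithm

;; Futures built from Gambit threads.
(define (future thunk)
  (if enable-parallel-karatsuba?
      (thread-start! (make-thread thunk))
      (thunk)))

(define (touch f)
  (if enable-parallel-karatsuba?
      (thread-join! f)
      f))

(define karatsuba-cutoff 4096)          ; bits; below this just use the built-in *

;; Toy Karatsuba on non-negative exact integers, three recursive
;; products spawned as futures.
(define (kara-mul x y)
  (let ((n (max (integer-length x) (integer-length y))))
    (if (< n karatsuba-cutoff)
        (* x y)
        (let* ((k    (quotient n 2))
               (mask (- (arithmetic-shift 1 k) 1))
               (x1   (arithmetic-shift x (- k)))  (x0 (bitwise-and x mask))
               (y1   (arithmetic-shift y (- k)))  (y0 (bitwise-and y mask))
               (z2   (future (lambda () (kara-mul x1 y1))))
               (z0   (future (lambda () (kara-mul x0 y0))))
               (z1   (future (lambda () (kara-mul (+ x1 x0) (+ y1 y0))))))
          (let ((z2 (touch z2)) (z0 (touch z0)) (z1 (touch z1)))
            (+ (arithmetic-shift z2 (* 2 k))
               (arithmetic-shift (- z1 z2 z0) k)
               z0))))))

A real version would presumably stop spawning futures below some size or recursion depth so the thread overhead doesn't swamp the arithmetic.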
The subroutines for fft-mul, together with their complexity for N-bit arguments, annotated EP (for embarrassingly parallel):
make-w: O(N), EP
make-w-rac: O(N), EP
bignum->f64vector-rac: O(N), EP
componentwise-rac-multiply: O(N), EP
componentwise-rac-multiply-conjugate: O(N), EP
componentwise-complex-multiply: O(N), EP
f64vector-rac->bignum: O(N), EP + a cleanup to propagate carries if necessary.
direct-fft-recursive-4: O(N log N), EP
inverse-fft-recursive-4: O(N log N), EP
plus
cleanup: O(N), EP + a cleanup to propagate borrows if necessary.
The routines direct-fft-recursive-4 and inverse-fft-recursive-4 are surprisingly efficient (about 1/2 the speed of FFTW), so even though those FFTs are O(N log N) and the other steps are only O(N), the O(N) steps take an unexpectedly (at least to me) large fraction of the total time.
BTW, the results of make-w can be reused for arguments of the same or smaller sizes, so they should be cached.
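A minimal sketch of what such a cache might look like, using Gambit's make-table/table-ref/table-set! and assuming make-w is a pure function of its log-size argument; w-cache and make-w/cached are invented names, and this simple version keeps one table per requested size rather than exploiting the reuse-a-larger-table property (which depends on the table layout):

(define w-cache (make-table))           ; log-size -> previously computed table

(define (make-w/cached log-size)
  (or (table-ref w-cache log-size #f)
      (let ((w (make-w log-size)))
        (table-set! w-cache log-size w)
        w)))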
To parallelize most of the O(N) routines you'd need to turn the basic loops into kernels with parameters and then break up the arguments accordingly.
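For the purely componentwise steps, that could look something like the following sketch: the loop body becomes a kernel parameterized by an index range, and a driver splits the range over a handful of Gambit threads. The kernel shown is a stand-in (a plain componentwise product of two f64vectors), not the actual componentwise-complex-multiply, and componentwise-mul-kernel!/parallel-componentwise-mul! are invented names:

;; Kernel: the basic loop, parameterized by [start, end).
(define (componentwise-mul-kernel! a b start end)
  (let loop ((i start))
    (if (fx< i end)
        (begin
          (f64vector-set! a i (fl* (f64vector-ref a i) (f64vector-ref b i)))
          (loop (fx+ i 1))))))

;; Driver: break the index range into chunks and run one thread per chunk.
(define (parallel-componentwise-mul! a b nthreads)
  (let* ((n     (f64vector-length a))
         (chunk (fxquotient (fx+ n (fx- nthreads 1)) nthreads))
         (threads
          (let loop ((start 0) (acc '()))
            (if (fx>= start n)
                (reverse acc)
                (let ((end (fxmin n (fx+ start chunk))))
                  (loop (fx+ start chunk)
                        (cons (thread-start!
                               (make-thread
                                (lambda ()
                                  (componentwise-mul-kernel! a b start end))))
                              acc)))))))
    (for-each thread-join! threads)))

Calling (parallel-componentwise-mul! a b 4) then does the same work as running the kernel sequentially over the whole vector.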
Here's the time to compute (square x) where x is (expt 3 100000000), with the timings for all the component subroutines.
(time (##make-f64vector (##fx* two^n 2)))
    2 ms real time
    0 ms cpu time (0 user, 0 system)
    1 collection accounting for 2 ms real time (0 user, 0 system)
    no bytes allocated
    320 minor faults
    no major faults

(time (make-w (##fx- log-two^n 1)))
    63 ms real time
    64 ms cpu time (52 user, 12 system)
    no collections
    36320 bytes allocated
    640 minor faults
    no major faults

(time (make-w-rac log-two^n))
    58 ms real time
    56 ms cpu time (44 user, 12 system)
    no collections
    48608 bytes allocated
    640 minor faults
    no major faults

(time (bignum->f64vector-rac x a))
    97 ms real time
    96 ms cpu time (84 user, 12 system)
    no collections
    no bytes allocated
    767 minor faults
    no major faults

(time (componentwise-rac-multiply a rac-table))
    82 ms real time
    80 ms cpu time (80 user, 0 system)
    no collections
    no bytes allocated
    no minor faults
    no major faults

(time (direct-fft-recursive-4 a table))
    1402 ms real time
    1404 ms cpu time (1404 user, 0 system)
    no collections
    80 bytes allocated
    no minor faults
    no major faults

(time (componentwise-complex-multiply a a))
    90 ms real time
    92 ms cpu time (92 user, 0 system)
    no collections
    no bytes allocated
    no minor faults
    no major faults

(time (inverse-fft-recursive-4 a table))
    1293 ms real time
    1292 ms cpu time (1292 user, 0 system)
    no collections
    80 bytes allocated
    no minor faults
    no major faults

(time (componentwise-rac-multiply-conjugate a rac-table))
    85 ms real time
    84 ms cpu time (84 user, 0 system)
    no collections
    no bytes allocated
    no minor faults
    no major faults

(time (f64vector-rac->bignum a result result-length))
    233 ms real time
    232 ms cpu time (232 user, 0 system)
    no collections
    128 bytes allocated
    no minor faults
    no major faults

(time (cleanup x y result))
    0 ms real time
    0 ms cpu time (0 user, 0 system)
    no collections
    no bytes allocated
    no minor faults
    no major faults

(time (* a a))
    3411 ms real time
    3412 ms cpu time (3364 user, 48 system)
    1 collection accounting for 2 ms real time (0 user, 0 system)
    180384 bytes allocated
    2843 minor faults
    no major faults
This result has 39,624,064 "fdigits" or 316,992,512 bits.
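For reference, the overall number can be reproduced at the REPL with something like

(define x (expt 3 100000000))
(time (square x))

while the per-subroutine timings above require instrumenting the bignum routines themselves.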
So the FFTs take only about 2700ms of the 3400ms total; all the rest of the steps take about 700ms. It's hard to get much speedup here without writing a lot of new code.
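As a rough Amdahl's-law estimate (back-of-the-envelope, not measured): if the two FFTs parallelized perfectly across 4 cores and the ~700ms of O(N) work stayed sequential, the total would be about 2700/4 + 700 ≈ 1375 ms, i.e. only about a 2.5x speedup, so the embarrassingly parallel O(N) steps would also need to be split up to get much further.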
Brad