[gambit-list] newbie: bitwise and arithmetic runtime speed for u64

Paolo pmontrasi at gmail.com
Sun May 17 14:39:02 EDT 2020


Hi all, the macro approach is giving about a 2x speedup even though not all bitwise operations use it yet. I am still using bignums to pass bit boards around in my code.

compiled binary with regular bitwise operation
> position fen r3k2r/p1ppqpb1/bn2pnp1/3PN3/1p2P3/2N2Q1p/PPPBBPPP/R3K2R w KQkq - 0 1
> perft depth 4
> info string depth= 4 nodes= 4085603 time= 9604. nps= 425406

compiled binary with the macro approach
> position fen r3k2r/p1ppqpb1/bn2pnp1/3PN3/1p2P3/2N2Q1p/PPPBBPPP/R3K2R w KQkq - 0 1
> perft depth 4
> info string depth= 4 nodes= 4085603 time= 5795. nps= 705022
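
For context, the macros I am talking about follow the ##c-code pattern Marc suggested (quoted at the bottom). A minimal sketch, not the actual code from my engine — the macro name and the board values are just illustrative; each bit board sits in a one-element u64vector and is updated in place:

(declare (standard-bindings) (extended-bindings) (not safe))

;; access the first 64-bit word of a u64vector from C
(c-declare "#define ELEM0(u64vect) ___BODY_AS(u64vect,___tSUBTYPED)[0]")

;; dst[0] ^= src[0], done in C, so no bignum is allocated
(define-macro (u64-xor! dst src)
  `(##c-code "ELEM0(___ARG1) ^= ELEM0(___ARG2);" ,dst ,src))

(define board1 (u64vector #x00FF00000000FF00))
(define board2 (u64vector #x0000001818000000))

(u64-xor! board1 board2)
(println (number->string (u64vector-ref board1 0) 16)) ;; prints ff00181800ff00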

I had to fix lots of bottlenecks in my code before I could see the nice boost this low-level code can give.
Thanks!

Paolo

> On 27 Apr 2020, at 00:20, Marc Feeley <feeley at iro.umontreal.ca> wrote:
> 
>> On Apr 26, 2020, at 9:48 AM, Paolo <pmontrasi at gmail.com> wrote:
>> 
>> Hi Brad, thank you for your suggestion.
>> I ended up testing something similar to the following example:
>> 
>> (define (u64-xor . args-list)
>>  (##bignum.normalize! 
>>    (fold
>>      (lambda (x big) 
>>        (let ((x-big (if (fixnum? x) (##fixnum->bignum x) x)))
>>          (##bignum.adigit-bitwise-xor! big 0 x-big 0)
>>          big))
>>      (##bignum.make 2 ##bignum.adigit-zeros #f)
>>      args-list)))
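>> 
>> Called for instance like this (an illustrative invocation only, with the expected hex result as a comment):
>> 
>> (number->string (u64-xor #x0123456789ABCDEF #x00FF00FF00FF00FF) 16) ;; => "1dc45988954cd10"
>> 
>> Note that each call still allocates a fresh 2-adigit bignum with ##bignum.make and then normalizes it.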
>> 
>> It worked, but with no noticeable improvement, and that fact helped me look elsewhere for speed problems … I found a lot of them in my own code, of course ;-)
>> 
>> Well, I did my best to fix most of the performance issues and I am now pretty happy with the result, so I am pointing you to my code in case you are looking for a fun "chess scheme" challenge:
>> 
>> https://github.com/pmon/coronachess
>> 
>> Thank you for your help, my best
>> Paolo
> 
> Nice!  I’ll have to try it out… As you noticed, the name Gambit has its origins in chess… I used to play regularly.  A gambit is a kind of scheme… and it is a calculated risk (which felt quite appropriate for my PhD work, which was also risky).
> 
> To do high-speed calculations on raw 64-bit integers, I would tend to use u64vectors (or even u8vectors) to store the 64-bit integers and to drop down to C when some operation on these integers must be done without creating bignums.  Something along these lines:
> 
> (declare
>  (standard-bindings)
>  (extended-bindings)
>  (not safe)
> )
> 
> (c-declare "#define ELEM0(u64vect) ___BODY_AS(u64vect,___tSUBTYPED)[0]")
> 
> (define-macro (u64-xor! v1 v2) ;; v1[0] = v1[0] ^ v2[0]
>  `(##c-code "ELEM0(___ARG1) ^= ELEM0(___ARG2);" ,v1 ,v2))
> 
> (define v1 (u64vector #x0123456789ABCDEF))
> (define v2 (u64vector #x00FF00FF00FF00FF))
> 
> (println (number->string (u64vector-ref v1 0) 16))
> (println (number->string (u64vector-ref v2 0) 16))
> 
> (u64-xor! v1 v2)
> 
> (println (number->string (u64vector-ref v1 0) 16))
> 
> You could create a small library of such macros for the various 64-bit operations, and try to avoid as much as possible conversions to and from bignums, which probably incur a high overhead in your application.
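> 
> For instance, the other operations could follow the same ELEM0 pattern (just a sketch, with arbitrary macro names):
> 
> (define-macro (u64-and! v1 v2) ;; v1[0] = v1[0] & v2[0]
>  `(##c-code "ELEM0(___ARG1) &= ELEM0(___ARG2);" ,v1 ,v2))
> 
> (define-macro (u64-ior! v1 v2) ;; v1[0] = v1[0] | v2[0]
>  `(##c-code "ELEM0(___ARG1) |= ELEM0(___ARG2);" ,v1 ,v2))
> 
> (define-macro (u64-not! v1) ;; v1[0] = ~v1[0]
>  `(##c-code "ELEM0(___ARG1) = ~ELEM0(___ARG1);" ,v1))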
> 
> Marc
