I wonder whether the (not interrupts-enabled) is a good idea: I expect that if you're handling huge strings, interactivity of other threads will suffer.
(Hm, thinking about it: for huge strings (those bigger than a CPU cache will hold), it could be worthwhile to run the comparison and hashing work in a set of separate OS threads; then you could not only run blocking code (even hand-coded C, or just memcmp for comparisons and e.g. lookup3.c by Bob Jenkins for hashing), but also make use of multiple processors.
I'd offer to implement it for you using pthreads, or with a thread abstraction layer if someone points out a good portable one (it would have to offer a way to hook into the Gambit scheduler; using pipes for synchronization is the easy way, but rather posixish). I'm not sure whether that should be in the Gambit core, though; it could live in modules (yes, which module system, I hear you ask; I'd do it in mine first simply because I already have lookup3 and POSIX bindings working). But maybe it would even make sense to put a system-threading engine for low-level work into Gambit already; it could prove useful for more things than just this.)
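A minimal, untested sketch of the pipe-synchronization idea (an illustration only, not Christian's implementation, and it uses a separate OS process rather than a pthread): the blocking work runs outside the Gambit VM and the caller waits on a pipe, so only the calling Gambit thread blocks while the others keep running. The external "cksum" command is just a stand-in for a real C hashing routine such as lookup3.

;; Hypothetical sketch: run blocking work in a separate OS process and
;; synchronize through a pipe.  open-process returns a port connected to
;; the child's stdin/stdout.
(define (checksum-in-other-process path)
  (let* ((p (open-process (list path: "cksum"
                                 arguments: (list path))))
         (result (read-line p)))   ; blocks this Gambit thread only
    (close-port p)
    result))

;; (checksum-in-other-process "/tmp/huge-file")
;;   => a line like "<crc> <byte-count> /tmp/huge-file"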
Christian.
On Feb 22, 2008, at 4:07 PM, Christian Jaeger wrote:
I wonder whether the (not interrupts-enabled) is a good idea: I expect that if you're handling huge strings, interactivity of other threads will suffer.
Marc has convinced me that disabling interrupts is not a good idea; a few tests show that it doesn't improve performance on those small, tight loops where you're (currently) sure you don't allocate any memory (probably because Gambit now uses __builtin_expect to tell gcc that those POLLs are unlikely to be taken), and in other cases it's not clear that the performance difference is caused by anything more than a different code alignment, etc.
So I've gone through the benchmarks and re-enabled interrupts. The only program where it makes a difference is fannkuch, and there the difference is a little less than 20%.
Brad
Did you do some tests on code size too? We have to be careful about that as well, since Gambit is already fairly bloated when it comes to generated code size.
On Fri, Feb 22, 2008 at 3:12 PM, Bradley Lucier lucier@math.purdue.edu wrote:
It cuts about 5% off some benchmarks.
On Feb 22, 2008, at 4:40 PM, Guillaume Cartier wrote:
Did you do some tests on code size too? We have to be careful about that as well, since Gambit is already fairly bloated when it comes to generated code size.
On Fri, Feb 22, 2008 at 3:12 PM, Bradley Lucier lucier@math.purdue.edu wrote:
It cuts about 5% off some benchmarks.
It reduces code size because it gets rid of POLLs in some tight loops in Gambit's runtime library. It has no effect on the size of user-compiled code. If you want to make the Gambit runtime smaller, I suggest
(##define-macro (use-fast-bignum-algorithms) #f)
in lib/_num.scm and then stripping the executables.
As to Christian's comments about unresponsiveness: there are already many tight loops in the Gambit runtime that disable interrupts to gain performance. I think if you want to throw around strings that are hundreds of megabytes long then you are willing to accept some unresponsiveness.
Brad
Bradley Lucier wrote:
As to Christian's comments about unresponsiveness: there are already many tight loops in the Gambit runtime that disable interrupts to gain performance. I think if you want to throw around strings that are hundreds of megabytes long then you are willing to accept some unresponsiveness.
Well, not necessarily; garbage collection may still run fast, since still (non-movable) objects don't have to be copied, and maybe there are no (other) places that require locking for a time proportional to the size of objects; I don't know. Then again, I'm the one who has suggested using separate Unix processes for independence a few times, and I don't have any numbers, so I won't protest. But I do suggest making the purpose of such optimizations visible and easily removable in the sources. I've made an alternative patch with that change (it also defines the combine macro only once; avoid duplication whenever possible).
Be aware that this is untested: I wanted to test it, but couldn't figure out how to compile Gambit from Mercurial.
The page http://dynamo.iro.umontreal.ca/~gambit/wiki/index.php/How_to_Contribute suggests this should work:
$ make bootstrap
making all in include
make[1]: Entering directory `/home/chris/schemedevelopment/gambit/gambit.git/include'
make[1]: Leaving directory `/home/chris/schemedevelopment/gambit/gambit.git/include'
making all in lib
make[1]: Entering directory `/home/chris/schemedevelopment/gambit/gambit.git/lib'
LD_LIBRARY_PATH=../lib:../gsi:../gsc: ../gsc-comp -:=.. -f -c -check _io.scm
/bin/sh: ../gsc-comp: No such file or directory
make[1]: *** [_io.c] Error 127
make[1]: Leaving directory `/home/chris/schemedevelopment/gambit/gambit.git/lib'
make: *** [all-recursive] Error 1
If I symlink gsc from my previous Gambit installation on that machine (4.0 beta 21), I'm getting:
...
make[1]: Entering directory `/home/chris/schemedevelopment/gambit/gambit.git/lib'
LD_LIBRARY_PATH=../lib:../gsi:../gsc: ../gsc-comp -:=.. -f -c -check _io.scm
gcc -I../include -I. -Wall -W -Wno-unused -O1 -fno-math-errno -fschedule-insns2 -fno-trapping-math -fno-strict-aliasing -fwrapv -fomit-frame-pointer -fPIC -fno-common -mieee-fp -DHAVE_CONFIG_H -D___PRIMAL -D___LIBRARY -D___GAMBCDIR=""/usr/local/Gambit-C/v4.2.2"" -D___SYS_TYPE_CPU=""i686"" -D___SYS_TYPE_VENDOR=""pc"" -D___SYS_TYPE_OS=""linux-gnu"" -c _io.c
In file included from _io.c:1248:
../include/gambit.h:19:30: error: gambit-not402002.h: No such file or directory
In file included from _io.c:1248:
../include/gambit.h:6609: error: expected specifier-qualifier-list before '___SCMOBJ'
../include/gambit.h:6672: error: expected specifier-qualifier-list before '___U32'
../include/gambit.h:6683: error: expected specifier-qualifier-list before '___U32'
../include/gambit.h:6728: error: expected '=', ',', ';', 'asm' or '__attribute__' before '___symkey_struct'
...
Creating an empty gambit-not402002.h file in the include directory (it's being included by the include/gambit.h file) doesn't help.
What's the problem?
BTW I've started playing around with Tailor(*) to convert between Mercurial and Git; you can see the two changes of my patch in separate pieces from the following gitweb URL:
http://scheme.mine.nu/dyn/gitweb?p=gambit;a=shortlog;h=refs/heads/stringspee...
But I haven't found out yet how to merge the Git changesets back to Mercurial (Tailor docs suggest it is possible).
Christian.
On Feb 23, 2008, at 3:21 PM, Christian Jaeger wrote:
Be aware that this is untested: I wanted to test it, but couldn't figure out how to compile Gambit from Mercurial.
The page http://dynamo.iro.umontreal.ca/~gambit/wiki/index.php/How_to_Contribute suggests this should work:
$ make bootstrap
You need to do this with a clean 4.2.2 installation, before any of the .scm files have been modified. Then
hg pull
make update
make
and perhaps iterate the last two. (I haven't yet worked out precisely how to pass through a 4.X level version number change.)
+;; Global tweaking knobs:
+(##define-macro (macro-not-interrupts-enabled-for-speed)
+  '(##declare (not interrupts-enabled)))
That's a really good idea. (It's in the same spirit as "(##define-macro (use-fast-bignum-algorithms) #t)" in _num.scm, which trades off speed against space.) Any loops in the bignum library that are linear in the size of the bignum have interrupts disabled for speed; disabling interrupts may not be so important now that Marc has started using the __builtin_expect form to tell gcc that it is unlikely that interrupts will be taken at POLLs.
There are a number of places in _std.scm where raw "memmove"s are used for speed. I suppose one could audit all uses of (not interrupts-enabled) in the runtime to see why each of them was put there.
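As a hypothetical illustration of how such a knob would be used at a call site (not taken from the patch or from the Gambit sources): the macro expands into the declaration at the start of a body, so removing or changing the single knob definition re-enables interrupts in every loop that relies on it.

(##define-macro (macro-not-interrupts-enabled-for-speed)
  '(##declare (not interrupts-enabled)))

;; Hypothetical speed-critical loop; the knob expands to
;; (##declare (not interrupts-enabled)) at the start of the body.
(define (count-char str c)
  (macro-not-interrupts-enabled-for-speed)
  (let loop ((i 0) (n 0))
    (if (< i (string-length str))
        (loop (+ i 1)
              (if (char=? (string-ref str i) c) (+ n 1) n))
        n)))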
+(define-macro (combine a b)
+  `(let ((a ,a)
+         (b ,b))
+     (##fixnum.bitwise-and
+      (##fixnum.* (##fixnum.+ a (##fixnum.arithmetic-shift-left b 1))
+                  331804471)
+      (macro-max-fixnum32))))
That's not a bad idea either. Perhaps if it's not local to these functions it could be renamed hash-combine or something like that.
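For readers following along, here is a hypothetical illustration (not the code from the patch or from _std.scm) of how a combine-style mixing step is typically used: each character code is folded into a running hash, and the final mask keeps the result inside the fixnum range. The value of macro-max-fixnum32 is assumed here to be 2^29 - 1, the largest fixnum of a 32-bit Gambit build.

(define-macro (macro-max-fixnum32) 536870911)  ; assumed value: 2^29 - 1

(define-macro (combine a b)
  `(let ((a ,a)
         (b ,b))
     (##fixnum.bitwise-and
      (##fixnum.* (##fixnum.+ a (##fixnum.arithmetic-shift-left b 1))
                  331804471)
      (macro-max-fixnum32))))

;; Hypothetical string hash built on combine: mixes each character's code
;; point into the accumulator h.
(define (string-hash-sketch s)
  (let loop ((i 0) (h 0))
    (if (< i (string-length s))
        (loop (+ i 1) (combine (char->integer (string-ref s i)) h))
        h)))

;; (string-hash-sketch "") => 0; every intermediate value stays a fixnum.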
Brad