Aha. Just out of general interest,
* Per GC cycle, how much time are we talking about on a random 2GHz machine, presuming all GVM threads run only all-safe-declared Scheme code?
* What is the mechanism of the sync to initiate a GC iteration: the first initiating thread setting some global variable (and msync:ing), with all other GVM threads polling it all the time, and/or sending some kind of interrupt signal via OS facilities? (See the sketch after this list.)
* What is the mechanism of the sync for different GC stages - ...?
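To make the flag-polling variant of the question concrete, here is a minimal sketch. This is not Gambit's actual code; request_gc, poll_point and enter_gc are invented names, and the real mechanism may well differ:

#include <stdatomic.h>
#include <stdbool.h>

static atomic_bool gc_requested = false;   /* hypothetical shared flag */

/* Placeholder for "park this thread and join the collection";
 * left empty in this sketch. */
static void enter_gc(void) { }

/* Called by whichever VM thread decides a collection is needed. */
void request_gc(void) {
  atomic_store_explicit(&gc_requested, true, memory_order_release);
}

/* Executed by every VM thread at its regular poll points
 * (e.g. function entry or loop back-edges in the generated code).
 * Resetting the flag once the collection finishes is omitted here. */
void poll_point(void) {
  if (atomic_load_explicit(&gc_requested, memory_order_acquire))
    enter_gc();
}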
2016-11-08 10:19 GMT+08:00 Marc Feeley feeley@iro.umontreal.ca:
The fact is that there is some synchronization overhead when the GC is parallel (the OS threads need to synchronize to run the GC and each of the phases of the GC in unison). So if the heap is small, as is the case for most unit tests, the GC doesn’t accelerate much because there is little parallelism to exploit, but there is an overhead to pay for attempting to do things in parallel. This is a common issue in parallel processing.
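For a rough picture of where that per-phase cost comes from, here is a hedged sketch, not the actual Gambit runtime: mark_phase, sweep_phase and the two-phase structure are invented for illustration. If the collector threads are kept in unison with a barrier between phases, every barrier crossing is a fixed synchronization cost that does not shrink with the heap:

#include <pthread.h>

/* Initialized elsewhere with pthread_barrier_init(&gc_phase_barrier,
 * NULL, n_collector_threads). */
static pthread_barrier_t gc_phase_barrier;

/* Invented per-thread phase work; empty stubs for the sketch. */
static void mark_phase(int self)  { (void)self; }
static void sweep_phase(int self) { (void)self; }

/* Each collector thread runs this.  No thread may start sweeping until
 * every thread has finished marking, so with a small heap the work
 * between barriers is tiny and the fixed barrier cost dominates. */
void gc_worker(int self) {
  mark_phase(self);
  pthread_barrier_wait(&gc_phase_barrier);
  sweep_phase(self);
  pthread_barrier_wait(&gc_phase_barrier);
}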
Marc
On Nov 7, 2016, at 8:53 PM, Adam adam.mlmb@gmail.com wrote:
And this shows us what?
Btw, isn't the only effect --enable-multiple-threaded-vms should have right now that execution is slightly faster, since the GC is now faster?
2016-11-08 9:31 GMT+08:00 Bradley Lucier lucier@math.purdue.edu:
On 11/07/2016 08:26 PM, Adam wrote:
Wait, what does this actually tell us?
There used to be a big difference in the time to complete the unit
tests: with --enable-multiple-threaded-vms it used to take
[ 122| 0| 0] 100% ########################################## 3.7s
and now it takes
[ 122| 0| 0] 100% ########################################## 1.8s
A big improvement. But without --enable-multiple-threaded-vms it takes
[ 122| 0| 0] 100% ########################################## 1.6s
So there's been an improvement.
Brad