I forgot to say that this is on a machine where nproc returns 8, but there are really only 4 cores with hyperthreading:
model name : Intel(R) Xeon(R) CPU E3-1271 v3 @ 3.60GHz
Brad
On 10/28/2016 05:54 PM, Bradley Lucier wrote:
On 10/25/2016 02:41 PM, Marc Feeley wrote:
Gambit has reached a milestone today with the implementation of the parallel garbage collector. This is an important step towards truly concurrent Gambit threads.
When Gambit is configured with --enable-multiple-threaded-vms the available processors will cooperate to do the garbage collection in parallel.
When I configure with
./configure 'CC=gcc -march=native' '--enable-single-host' '--enable-multiple-versions' '--enable-shared' --enable-multiple-threaded-vms
it takes this long to run the unit tests:
LD_LIBRARY_PATH=../lib:../gsi:../gsc:/usr/local/libimobiledevice/lib:/usr/local/Gambit/current/lib:/usr/local/netpbm10/lib: ../gsi/gsi -:tl,~~bin=../bin,~~lib=../lib,~~include=../include -f ./run-unit-tests.scm
[ 122| 0| 0] 100% ########################################## 3.7s
PASSED ALL 122 UNIT TESTS
When I configure with
heine:~/programs/gambit/gambit> ./configure 'CC=gcc -march=native' '--enable-single-host' '--enable-multiple-versions' '--enable-shared'
the unit tests run in this time:
LD_LIBRARY_PATH=../lib:../gsi:../gsc:/usr/local/libimobiledevice/lib:/usr/local/Gambit/current/lib:/usr/local/netpbm10/lib: ../gsi/gsi -:tl,~~bin=../bin,~~lib=../lib,~~include=../include -f ./run-unit-tests.scm
[ 122| 0| 0] 100% ########################################## 1.6s
PASSED ALL 122 UNIT TESTS
That's a big difference. This is on Ubuntu 16.10 with the built-in gcc 6.2.0.
Any suggestions?
Brad
I've got something else happening here on OSX (gcc 6.2.0), built with: v4.8.5 20161025181557 x86_64-apple-darwin16.1.0 "./configure '--enable-single-host' '--enable-gcc-opts' '--enable-multiple-threaded-vms' '--enable-activity-log' 'CC=gcc-6'"
The unit tests complete most of the time without issue (and on occasion there is an additional second or so, which xactlog shows as post-GC wait time), but I've noticed a few times that the unit tests stall indefinitely, and the percent complete at which they stall varies. I can try to provide more information if it's helpful.
James
Thanks for the feedback. I've seen similar behavior, so I will look into this.
Marc
Could you try again with the latest commit? I changed the implementation of the barrier sync operation, which is used by the GC for synchronizing processors.
Marc
I just rebuilt Gambit and ran the unit tests a dozen times. While that is probably not sufficient for a solid stress test, it would definitely have been enough to demonstrate the previous behaviour, and I couldn't reproduce it. Furthermore, the timing for each run is now back down again, and each run takes a very similar amount of time.
Aside from Gambit, the rest of my environment has remained unchanged since I last attempted this.
James
Interesting. Without --enable-multiple-threaded-vms I get
------------ TEST 6 (link and execute the code generated)
rm -f mix_.c mix.o mix_.o mix
LD_LIBRARY_PATH=../lib:../gsi:../gsc:/usr/local/libimobiledevice/lib:/usr/local/Gambit/current/lib:/usr/local/netpbm10/lib: ../gsc/gsc -:~~bin=../bin,~~lib=../lib,~~include=../include -f -warnings -o mix -exe mix.c
LD_LIBRARY_PATH=../lib:../gsi:../gsc:/usr/local/libimobiledevice/lib:/usr/local/Gambit/current/lib:/usr/local/netpbm10/lib: ./mix -:m4000 > test6.out
.188 secs elapsed cpu time
heartbeat frequency = 244.68085106382978 Hz
diff test6.ok test6.out && \
rm -f test6.out mix.c mix_.c mix.o mix_.o mix
and
LD_LIBRARY_PATH=../lib:../gsi:../gsc:/usr/local/libimobiledevice/lib:/usr/local/Gambit/current/lib:/usr/local/netpbm10/lib: ../gsc/gsc -:~~bin=../bin,~~lib=../lib,~~include=../include -f -i test10.scm
------------ TEST 11 (run unit tests)
make ut
make[2]: Entering directory '/home/lucier/programs/gambit/gambit/tests'
LD_LIBRARY_PATH=../lib:../gsi:../gsc:/usr/local/libimobiledevice/lib:/usr/local/Gambit/current/lib:/usr/local/netpbm10/lib: ../gsi/gsi -:tl,~~bin=../bin,~~lib=../lib,~~include=../include -f ./run-unit-tests.scm
[ 122| 0| 0] 100% ########################################## 1.6s
PASSED ALL 122 UNIT TESTS
while with it I get
------------ TEST 6 (link and execute the code generated)
rm -f mix_.c mix.o mix_.o mix
LD_LIBRARY_PATH=../lib:../gsi:../gsc:/usr/local/libimobiledevice/lib:/usr/local/Gambit/current/lib:/usr/local/netpbm10/lib: ../gsc/gsc -:~~bin=../bin,~~lib=../lib,~~include=../include -f -warnings -o mix -exe mix.c
LD_LIBRARY_PATH=../lib:../gsi:../gsc:/usr/local/libimobiledevice/lib:/usr/local/Gambit/current/lib:/usr/local/netpbm10/lib: ./mix -:m4000 > test6.out
.172 secs elapsed cpu time
heartbeat frequency = 203.48837209302326 Hz
*** possible problem: expected heartbeat frequency = 250. Hz
diff test6.ok test6.out && \
rm -f test6.out mix.c mix_.c mix.o mix_.o mix
and
------------ TEST 11 (run unit tests)
make ut
make[2]: Entering directory '/home/lucier/programs/gambit/gambit/tests'
LD_LIBRARY_PATH=../lib:../gsi:../gsc:/usr/local/libimobiledevice/lib:/usr/local/Gambit/current/lib:/usr/local/netpbm10/lib: ../gsi/gsi -:tl,~~bin=../bin,~~lib=../lib,~~include=../include -f ./run-unit-tests.scm
[ 122| 0| 0] 100% ########################################## 1.8s
PASSED ALL 122 UNIT TESTS
So the timing differences are now small, but the heartbeat frequency is somewhat off.
So, improvement!
Brad
Wait, what does this actually tell us?
On 11/07/2016 08:26 PM, Adam wrote:
Wait, what does this actually tell us?
There used to be a big difference in the time to complete the unit tests: with --enable-multiple-threaded-vms it used to take
[ 122| 0| 0] 100% ########################################## 3.7s
and now it takes
[ 122| 0| 0] 100% ########################################## 1.8s
A big improvement. But without --enable-multiple-threaded-vms it takes
[ 122| 0| 0] 100% ########################################## 1.6s
So there's been an improvement.
Brad
And this shows us what?
Btw, isn't the only effect that --enable-multiple-threaded-vms should have right now that execution is slightly faster, since the GC is faster now?
The fact is that there is some synchronization overhead when the GC is parallel (the OS threads need to synchronize to run the GC and each of the phases of the GC in unison). So if the heap is small, as is the case for most unit tests, the GC doesn’t accelerate much because there is little parallelism to exploit, but there is an overhead to pay for attempting to do things in parallel. This is a common issue in parallel processing.
Marc
Aha. Just out of general interest,
* Per GC cycle, how much time are we talking about on a random 2 GHz machine, presuming all GVM threads run only all-safe-declared Scheme code?
* What is the mechanism of the sync to initiate a GC iteration: the first initiating thread setting some global variable, msync:ing, and all other GVM threads polling it all the time, and/or sending some kind of interrupt signal via OS facilities?
* What is the mechanism of the sync for the different GC stages: ...?
On Nov 7, 2016, at 9:50 PM, Adam adam.mlmb@gmail.com wrote:
Aha. Just out of general interest,
- Per GC cycle, how much time are we talking about on a random 2 GHz machine, presuming all GVM threads run only all-safe-declared Scheme code?
It depends on the number of CPUs on your machine (because the cost of synchronization goes up with the number of processors). For 4 CPUs on a 2.6 GHz machine and a minimum size heap, a GC cycle (which of course includes all the synchronizations) takes about 100 microseconds.
What is the mechanism of the sync to initiate a GC iteration: the first initiating thread setting some global variable, msync:ing, and all other GVM threads polling it all the time, and/or sending some kind of interrupt signal via OS facilities?
What is the mechanism of the sync for the different GC stages: …?
The phases of the GC are:
1) Setup the stacks and heaps of all the processors
2) Mark the objects that are reachable strongly from roots
3) Mark the objects that are reachable weakly
4) Process gc hash tables and free unreachable still objects
5) Resize heap
These phases are separated by synchronization barriers (so all processors must have finished a phase before any processor starts the next one).
The barrier synchronizations are implemented using a binary tree like structure and time-limited spin-barriers to synchronize a parent processor with its 2 children. So a barrier takes logarithmic time.
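To make that concrete, here is a minimal single-use sketch of such a tree barrier (illustrative names and C11 atomics are assumptions, not the actual Gambit runtime code; the time-limited spinning fallback and the sense reversal needed to reuse the barrier are omitted for brevity):

    /* Sketch of a tree barrier: processor i waits for children 2i+1 and
       2i+2 to arrive, reports to its parent, then releases its children,
       so arrival and release each take logarithmic time. */
    #include <stdatomic.h>

    typedef struct {
      atomic_int arrived;  /* set when this processor's subtree has arrived */
      atomic_int release;  /* set by the parent to release this subtree     */
    } barrier_node;

    void barrier_sync(barrier_node *node, int i, int nprocs) {
      int left = 2*i + 1, right = 2*i + 2;
      if (left < nprocs)
        while (!atomic_load(&node[left].arrived)) ;   /* spin (time-limited in practice) */
      if (right < nprocs)
        while (!atomic_load(&node[right].arrived)) ;  /* spin */
      atomic_store(&node[i].arrived, 1);              /* whole subtree is in */
      if (i != 0)                                     /* the root starts the release wave */
        while (!atomic_load(&node[i].release)) ;      /* spin */
      if (left < nprocs)  atomic_store(&node[left].release, 1);
      if (right < nprocs) atomic_store(&node[right].release, 1);
    }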
To initiate a garbage collection, a processor raises a flag in all the other processor state structures and sends each processor a dummy byte in its self-pipe that is always checked by "select" (this is to ensure that processors waiting on I/O will stop waiting and enter the garbage collector).
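Roughly, the initiating side might look like this (a sketch with assumed names; the real runtime differs in detail):

    /* Sketch: to initiate a GC, raise a flag in every processor's state
       and write a dummy byte to its self-pipe, whose read end is always
       included in the processor's select() set, so processors blocked
       on I/O wake up and enter the collector. */
    #include <unistd.h>

    typedef struct {
      volatile int gc_interrupt;  /* polled regularly at interrupt checks */
      int self_pipe_wr;           /* write end of this processor's self-pipe */
    } processor_state;

    void request_gc(processor_state *ps, int nprocs) {
      char dummy = 0;
      int i;
      for (i = 0; i < nprocs; i++) {
        ps[i].gc_interrupt = 1;                      /* seen at the next poll point */
        (void)write(ps[i].self_pipe_wr, &dummy, 1);  /* wake select() waiters */
      }
    }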
Marc
From now on, "processor" will mean "an OS thread that's running a GVM", right?
2016-11-08 12:31 GMT+08:00 Marc Feeley feeley@iro.umontreal.ca: ..
The barrier synchronizations are implemented using a binary tree like structure and time-limited spin-barriers to synchronize a parent processor with its 2 children. So a barrier takes logarithmic time.
Cool!
Just out of curiosity for nitty-gritty details:
Which processor is the ultimate parent? Is this determined at processor initialization, or at each GC? Is it the processor that triggers the GC?
What's the motivation for tree-like propagation at all, compared for instance with having the processor that triggers the GC do the sync with all the other processors, all by itself?
Finally, as it looks now, on what processor will wills be executed? Undefined, i.e. random?
On Nov 9, 2016, at 10:28 PM, Adam adam.mlmb@gmail.com wrote:
From now on, "processor" will mean "an OS thread that's running a GVM", right?
Sort of… A GVM (Gambit virtual machine) is actually a set of “processors” running a Gambit program. Typically a GVM “processor” is mapped to an OS thread. This choice of vocabulary is to abstract the implementation details and impress the idea that conceptually the VM is running on a set of processors, in parallel. In an “on the bare metal” implementation these “processors” would be actual hardware processors. But when running on top of a traditional OS, where it is not possible to access hardware processors directly, each “processor” is implemented with an OS thread, and it is expected that the OS will be intelligent enough to assign different hardware processors to all these OS threads (with posix threads and Windows the thread affinity is used to help the OS achieve a one-to-one mapping).
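On Linux with posix threads, that affinity hint can be given roughly as follows (a sketch; pin_to_cpu is a made-up name, and the actual runtime also covers Windows and other platforms):

    /* Sketch: pin one OS thread per GVM "processor" so the OS keeps a
       one-to-one mapping to hardware CPUs.  Linux-specific API. */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>

    static void pin_to_cpu(pthread_t t, int cpu) {
      cpu_set_t set;
      CPU_ZERO(&set);
      CPU_SET(cpu, &set);
      pthread_setaffinity_np(t, sizeof(set), &set);
    }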
2016-11-08 12:31 GMT+08:00 Marc Feeley feeley@iro.umontreal.ca: .. The barrier synchronizations are implemented using a binary tree like structure and time-limited spin-barriers to synchronize a parent processor with its 2 children. So a barrier takes logarithmic time.
Cool!
Just out of curiosity for nitty-gritty details:
Which processor is the ultimate parent? Is this determined at processor initialization, or at each GC? Is it the processor that triggers the GC?
When the Gambit process starts, the current thread is considered “processor 0”. After the Scheme library is initialized, the VM is resized to N processors (where N is supplied by the -:pN runtime option). So processor 0 is initially running the primordial Scheme thread. Note however that threads can migrate to another processor to balance the load.
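For example, a VM with 4 processors would be started with the runtime option mentioned above (myprogram.scm being a hypothetical program file):

    gsi -:p4 myprogram.scm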
All processors run the Scheme code in parallel and each processor has a heap section in which it does its memory allocations independently from the other processors (so allocation requires no locking, except when a heap section is full and a new heap section needs to be obtained from the pool of free heap sections, but that is relatively infrequent and there is very low contention for the lock). When the pool of free heap sections is exhausted, the processor that is doing the allocation will trigger a GC (so any processor can trigger a GC, and it is possible that more than one processor simultaneously trigger a GC). At that point all processors are interrupted (by raising a flag that is polled regularly) to execute the GC in parallel. This is done using a barrier synchronization (so that the GC starts only after all processors have transitioned from the execution of the main program to the execution of the GC, to avoid having some processors start the GC while others are still allocating or modifying objects). Then, within the GC, barrier synchronizations are also performed to separate each phase of the GC (initialization, assignment of stack and heap sections, marking using strong references, …).
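A minimal sketch of that allocation fast path (illustrative names; refill_section_or_trigger_gc stands in for the locked slow path described above and is not a real runtime function):

    /* Sketch: lock-free bump allocation within a per-processor heap
       section; the lock is only taken on the infrequent slow path. */
    #include <stddef.h>

    typedef struct {
      char *alloc_ptr;    /* next free byte in the current section */
      char *alloc_limit;  /* end of the current section */
    } processor_heap;

    /* Hypothetical slow path: under a lock, take a section from the
       free pool, or trigger a GC if the pool is exhausted. */
    void refill_section_or_trigger_gc(processor_heap *h, size_t bytes);

    void *allocate(processor_heap *h, size_t bytes) {
      if (h->alloc_ptr + bytes > h->alloc_limit)
        refill_section_or_trigger_gc(h, bytes);
      void *obj = h->alloc_ptr;
      h->alloc_ptr += bytes;  /* no locking on this fast path */
      return obj;
    }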
What's the motivation for tree-like propagation at all, compared for instance with having the processor that triggers the GC do the sync with all the other processors, all by itself?
That would take linear time. Logarithmic is faster. Also, it is necessary to have a synchronization mechanism that will tolerate simultaneous triggering of the GC and only do one GC regardless of how many processors triggered a GC. So using a predefined barrier synchronisation primitive is not sufficient. The mechanism implemented in the runtime system allows processors to request a synchronous service (such as garbage collection, or resizing the VM) and the mechanism will sort out which service “wins” (in the case where there are conflicting services requested).
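One generic way to get the "only one service wins" behaviour (a sketch of the general technique, not necessarily Gambit's exact mechanism) is a compare-and-swap on a shared service word:

    /* Sketch: all requesters race to install their service; exactly one
       CAS succeeds, and every processor then performs the winning
       service behind the barrier.  A real system might also resolve
       conflicts by priority rather than first-come-first-served. */
    #include <stdatomic.h>

    enum { SERVICE_NONE, SERVICE_GC, SERVICE_RESIZE_VM };

    atomic_int requested_service = SERVICE_NONE;

    void request_service(int service) {
      int expected = SERVICE_NONE;
      atomic_compare_exchange_strong(&requested_service, &expected, service);
      /* All processors then barrier-sync, perform requested_service
         once, and reset it to SERVICE_NONE behind another barrier. */
    }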
Finally, as it looks now, on what processor will wills be executed? Undefined, i.e. random?
Currently it is processor 0, but this may change.
Marc
Hi Marc,
Interesting! Thanks for taking the time to describe this.
Here are some more questions; only the first three have practical importance though.
(Aha, below by "processor" you mean CPU core.)
How well does the parallel heap and GC model fit different memory coherency models, e.g. that of AMD64 (strong) and ARM (weak)?
(I guess those are the two extremes, and other architectures like IBM Power, SPARC, MIPS, you name it, land between those.)
What is the execution model for Gambit threads? Is the default mode that execution is spread across all GVM processors?
If I run code in Gambit that blocks, e.g. blocking system calls such as the DNS lookup in open-tcp-client, or C code, can I devote a given number of GVM processors to that?
Would there be some way for me to conveniently run blocking C code and GVM processors in the same OS thread, so that when I go into the C world I make some kind of "stamp out", so that a GC would not block until that OS thread returned to the Scheme world? How expensive would such a "stamp out" be?
2016-11-10 21:00 GMT+08:00 Marc Feeley feeley@iro.umontreal.ca:
All processors run the Scheme code in parallel and each processor has a heap section in which it does its memory allocations independently from the other processors (so allocation requires no locking, except when a heap section is full and a new heap section needs to be obtained from the pool of free heap sections, but that is relatively infrequent and there is very low contention for the lock). When the pool of free heap sections is exhausted, the processor that is doing the allocation will trigger a GC (so any processor can trigger a GC, and it is possible that more than one processor simultaneously trigger a GC).
Just curious, where are the malloc() calls done (to increase the total heap space available for live objects)?
Also, is there any relevance in changing the memory block size to the page size, e.g. 4096 bytes from the previous 512 bytes, so as to minimize the possibility that two processors write to memory addresses within the same page, hence congesting the memory coherence logic on AMD64?
(I.e. the performance difference on AMD64 between core1 and core2 doing writes to the same memory page concurrently, and doing writes to different pages concurrently, is enormous. If I recall right, the paper "What Every Programmer Should Know About Memory" by Ulrich Drepper, https://www.akkadia.org/drepper/cpumemory.pdf, showed some measurements with something like 10000x performance differences.)
I'm not sure whether malloc() results tend to be aligned to page boundaries; anyhow, I guess that would be a healthy assumption.
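For what it's worth, page alignment can be guaranteed rather than assumed with the standard POSIX call (a sketch; the 4096 constant would really come from sysconf(_SC_PAGESIZE)):

    /* Sketch: page-aligned allocation of a heap section, so writes by
       different processors never share a page. */
    #include <stdlib.h>

    #define PAGE_SIZE 4096  /* assumed; query sysconf(_SC_PAGESIZE) in practice */

    void *alloc_section(size_t bytes) {
      void *p = NULL;
      if (posix_memalign(&p, PAGE_SIZE, bytes) != 0)  /* aligned, unlike malloc */
        return NULL;
      return p;
    }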
What's the motivation for tree-like propagation at all, compared for instance with having the processor that triggers the GC do the sync with all the other processors, all by itself?
That would take linear time. Logarithmic is faster.
Wait. Doing a loop from 0 to N cores (which is generally below 100 or 1000 anyhow) to set a memory address would take negligible time on all architectures:
    for (i = 0; i < processors; i++) {
      processor[i]->going_into_gc = true;
    }
Is this propagation used not only for signalling that you're going into a synchronous operation/GC, but also for complex operations, like distributing the workload within the marking process?
Also, it is necessary to have a synchronization mechanism that will tolerate simultaneous triggering of the GC and only do one GC regardless of how many processors triggered a GC. So using a predefined barrier synchronisation primitive is not sufficient. The mechanism implemented in the runtime system allows processors to request a synchronous service (such as garbage collection, or resizing the VM) and the mechanism will sort out which service “wins” (in the case where there are conflicting services requested).
Is the "resizing the VM" about changing total heap size, or changing the number of processors, or either?
Are more synchronous services coming up?
How do you ensure that the GC is triggered exactly once? Say the GC-triggering logic in one processor is if (gc found to be needed) { workup; go into gc; }. If that triggers in more processors at exactly the same time, just very approximately, how do you make it go into the GC exactly once?
2016-11-11 10:31 GMT+08:00 Adam adam.mlmb@gmail.com: ..
Not sure if malloc() tends to be aligned to page limits, anyhow I guess that would be a healthy assumption.
Reading some malloc implementation man pages, I see that they frequently say allocations of page size and up are page-aligned. So yes, that should be a healthy assumption.