I'm interested in using multithreaded C++ code along with Gambit. From what I understand, Gambit does not support native multithreading, but what about the FFI? Am I allowed to have multiple C++ threads running alongside Gambit?
Additionally (in case I missed something), is there any way to get Gambit itself running in multiple native threads? I don't need full-blown functionality, but I'll take anything that's available.
Many thanks,
Hello,
On Thu, Mar 22, 2012 at 11:28 AM, Patrick Bene pubbybene@gmail.com wrote:
I'm interested in using multithreaded C++ code along with Gambit. From what I understand, Gambit does not support native multithreading, but what about the FFI? Am I allowed to have multiple C++ threads running alongside Gambit?
Additionally (in case I missed something), is there any way to get Gambit itself running in multiple native threads? I don't need full-blown functionality, but I'll take anything that's available.
Some ideas:
- Start more than one Gambit process and synchronise them using IPC, like shared memory or a local UNIX socket. Each Gambit process can run thousands of green threads (they will all use 1 CPU).
- Add a C/C++ function to Gambit to start and communicate with the other OS threads. In this case the other threads' code cannot run Gambit code, because it is not thread-safe. This is better if the tasks are simple or you can find C/C++ libraries to help.
Cheers,
On 2012-03-22, at 6:28 AM, Patrick Bene wrote:
I'm interested in using multithreaded C++ code along with Gambit. From what I understand, Gambit does not support native multithreading, but what about the FFI? Am I allowed to have multiple C++ threads running alongside Gambit?
Yes, you can have multiple OS threads in a process that is executing Gambit-compiled code. Currently you are constrained to having only a single OS thread at a time executing Gambit-compiled code; the OS thread can change over time, but only one executes Scheme code at any given moment. For a simple example, check the directory examples/pthread in the Gambit source distribution.
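For illustration, here is a minimal sketch of that pattern (this is not the actual examples/pthread code; the C names such as start_worker are made up for this sketch). An OS thread is started from Scheme through the FFI, but the thread itself runs only C code and never calls back into Scheme:

(c-declare #<<c-declare-end
#include <pthread.h>

static volatile long counter = 0;  /* updated by the worker thread */

static void *worker (void *arg)
{
  long i;
  for (i = 0; i < 100000000; i++)  /* pure C work, no Scheme calls */
    counter++;
  return NULL;
}

static int start_worker (void)
{
  pthread_t t;
  return pthread_create (&t, NULL, worker, NULL) == 0;
}

static long read_counter (void) { return counter; }
c-declare-end
)

(define start-worker (c-lambda () bool "start_worker"))
(define read-counter (c-lambda () long "read_counter"))

(start-worker)                 ;; the C thread runs alongside the Scheme code
(thread-sleep! 1)              ;; Gambit's green threads keep running meanwhile
(pretty-print (read-counter)) ;; poll the result from the Scheme side

The Scheme side keeps running the usual Gambit scheduler and merely polls the result; proper synchronization of the shared counter is omitted for brevity, and the pthread library must be linked in when compiling (for example via gsc's -ld-options "-lpthread").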
Additionally (in case I missed something), is there any way to get Gambit itself running in multiple native threads? I don't need full-blown functionality, but I'll take anything that's available.
That's not currently possible because the Scheme global variables, and some global state of the Gambit VM, are implemented using global C/C++ structures. What is needed (and something that is on my TODO) is to allow instances of this state to be created dynamically (essentially a constructor for the Gambit VM). That way several instances of Gambit can coexist in the same OS process. Unfortunately, this will slow down accesses to global variables, because an indirection will be needed.
If you want to take advantage of parallel execution on a multicore, currently the only approach is to use multiple OS processes. Each OS process runs an instance of the Gambit program. They can communicate using the network, a pipe, a fifo, and any other IPC mechanism supported by your OS (see for example http://beej.us/guide/bgipc/output/html/singlepage/bgipc.html). This is how Termite was implemented (http://code.google.com/p/termite/).
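As a concrete sketch of the multi-process approach (the port number and the little echo protocol are made up for this example), two Gambit processes can exchange Scheme data over a local TCP connection; a fifo or UNIX socket would work the same way:

;; server.scm -- run in one Gambit process
(define listener (open-tcp-server (list port-number: 44555)))

(let loop ()
  (let ((conn (read listener)))       ;; blocks until a client connects
    (let ((msg (read conn)))          ;; read one Scheme datum
      (write (list 'echo msg) conn)   ;; reply with another Scheme datum
      (force-output conn)
      (close-port conn)))
  (loop))

;; client.scm -- run in another Gambit process
(define conn (open-tcp-client (list server-address: "localhost"
                                    port-number: 44555)))

(write '(hello world) conn)
(force-output conn)
(pretty-print (read conn))            ;; => (echo (hello world))
(close-port conn)

Because the processes read and write Scheme data directly, higher-level protocols (like Termite's message passing) can be layered on top of exactly this kind of channel.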
Marc
Dear Marc, I'm curious,
On 22 March 2012 14:46, Marc Feeley feeley@iro.umontreal.ca wrote:
That's not currently possible because the Scheme global variables, and
some global state of the Gambit VM, are implemented using global C/C++ structures. What is needed (and something that is on my TODO) is to allow instances of this state to be created dynamically (essentially a constructor for the Gambit VM). That way several instances of Gambit can coexist in the same OS process.
Unfortunately, this will slow down accesses to global variables, because an indirection will be needed.
So with this several-Gambit-VMs-coexisting-in-the-same-OS-process option, each read and write of a global variable will require one more indirection, and that's it?
Does access to Gambit runtime procedures like |list| and |open-string| qualify as such a global variable access?
You mean one indirection, i.e. just following one pointer once, right? If so, I suppose this gives at most an approximately 0.5% performance decrease for typical code - and in that case, for when this functionality is needed, it's really worth it.
Will there be a configure argument to enable/disable this option, both now and later when running in SMP mode?
Kind regards, Mikael
On 2012-03-28, at 8:59 AM, Mikael wrote:
Dear Marc, I'm curious,
On 22 March 2012 14:46, Marc Feeley feeley@iro.umontreal.ca wrote:
That's not currently possible because the Scheme global variables, and some global state of the Gambit VM, are implemented using global C/C++ structures. What is needed (and something that is on my TODO) is to allow instances of this state to be created dynamically (essentially a constructor for the Gambit VM). That way several instances of Gambit can coexist in the same OS process.
Unfortunately, this will slow down accesses to global variables, because an indirection will be needed.
So with this several-Gambit-VMs-coexisting-in-the-same-OS-process option, each read and write of a global variable will require one more indirection, and that's it?
The "indirection" might be more than a pointer indirection. The point is that each instance of the VM will have its own set of global variables. I think it can be implemented with a single indirection by declaring that all the VMs have the same set of global variables (but each instance of a global variable has its own value). But I haven't implemented this yet.
Does access to Gambit runtime procedures like |list| and |open-string| qualify as such a global variable access?
Yes, even though it is very likely that all the instances will contain the same value.
You mean one indirection, i.e. just following one pointer once, right? If so, I suppose this gives at most an approximately 0.5% performance decrease for typical code - and in that case, for when this functionality is needed, it's really worth it.
Only by benchmarking this will I be able to tell if it is 0.5% or 5%. It could be 50% for some programs constantly accessing global variables.
Will there be a configure argument to enable/disable this option, both now and later when running in SMP mode?
Absolutely. Gambit's architecture, with the gambit.h header file, makes it easy to have a switch to enable the existence of multiple VM instances. When this is disabled, I expect the same performance as now.
Marc
On 2012-03-28, at 9:26 AM, Marc Feeley wrote:
On 2012-03-28, at 8:59 AM, Mikael wrote:
Dear Marc, I'm curious,
On 22 March 2012 14:46, Marc Feeley feeley@iro.umontreal.ca wrote:
That's not currently possible because the Scheme global variables, and some global state of the Gambit VM, are implemented using global C/C++ structures. What is needed (and something that is on my TODO) is to allow instances of this state to be created dynamically (essentially a constructor for the Gambit VM). That way several instances of Gambit can coexist in the same OS process.
Unfortunately, this will slow down accesses to global variables, because an indirection will be needed.
So with this several-Gambit-VMs-coexisting-in-the-same-OS-process option, each read and write of a global variable will require one more indirection, and that's it?
The "indirection" might be more than a pointer indirection. The point is that each instance of the VM will have its own set of global variables. I think it can be implemented with a single indirection by declaring that all the VMs have the same set of global variables (but each instance of a global variable has its own value). But I haven't implemented this yet.
Does access to Gambit runtime procedures like |list| and |open-string| qualify as such a global variable access?
Yes, even though it is very likely that all the instances will contain the same value.
You mean one indirection, i.e. just following one pointer once, right? If so, I suppose this gives at most an approximately 0.5% performance decrease for typical code - and in that case, for when this functionality is needed, it's really worth it.
Only by benchmarking this will I be able to tell if it is 0.5% or 5%. It could be 50% for some programs constantly accessing global variables.
I now have some data. For an experiment, I have modified gambit.h and a few runtime files to allow multiple instances of global variables. In the current Gambit, a Scheme global variable is basically a C global variable. In my experiment, each Scheme global variable is assigned an index which is used to access an array of global variables within the "processor state" structure (which can be instantiated multiple times). The array of global variables is either "inline", at the end of the processor state structure, or is accessed using a pointer in the processor state structure (which means there is an extra level of indirection, but it makes it easier to grow the array when needed).
I compiled the following program with each implementation of the global variables:
(declare (standard-bindings) (extended-bindings) (fixnum) (not safe))
(define res 0)
(define (fib n)
  (if (fx< n 2)
      (set! res n)
      (begin (fib (fx- n 1))
             (let ((tmp res))
               (fib (fx- n 2))
               (set! res (+ res tmp))))))
(time (fib 40))
This is good old Fibonacci, but written in a style that accesses global variables frequently. I also measured the execution of a more realistic large program, the Gambit compiler itself compiling lib/_io.scm, with each global variable implementation method.
The execution times in seconds (and the relative times) are:
                                    fib            compiler
 standard Gambit                    2.473 (1.00)   5.393 (1.00)
 inline array of global vars        2.680 (1.08)   5.580 (1.03)
 pointer to array of global vars    2.764 (1.12)   5.574 (1.03)
So the overhead of supporting multiple instances of the Gambit VM is between 8% and 12% for the version of fib which accesses global variables frequently, and is 3% for the Gambit compiler.
The overhead seems to be acceptably low, but for some programs it could be noticeable (when accessing global variables even more frequently than in the above program).
Marc
So the overhead of supporting multiple instances of the Gambit VM is between 8% and 12% for the version of fib which accesses global variables frequently, and is 3% for the Gambit compiler.
This seems to be very encouraging news!
I was thinking about how this might apply to the situation on iOS, where we may want to wrap certain asynchronous activities in Objective-C blocks. Would it be possible to have code that is destined to be called asynchronously compiled in the slower, thread-safe style, while the main program loop sticks with the single-threaded fast/global style?
This is probably not feasible in general since your entire runtime library would have to be compiled twice and stored in memory twice?
On Apr 20, 2012, at 7:48 PM, Marc Feeley wrote:
The overhead seems to be acceptably low, but for some programs accessing global variables frequently it could be noticeable
Just in case you're taking an informal poll, I think that this relatively small overhead would be more than compensated by the gains to be had on multicore processors which are now nearly ubiquitous, even on mobile devices.
warmest regards,
Ralph
Raffael Cavallaro raffaelcavallaro@me.com
On 2012-04-20, at 10:01 PM, Raffael Cavallaro wrote:
On Apr 20, 2012, at 7:48 PM, Marc Feeley wrote:
The overhead seems to be acceptably low, but for some programs accessing global variables frequently it could be noticeable
Just in case you're taking an informal poll, I think that this relatively small overhead would be more than compensated by the gains to be had on multicore processors which are now nearly ubiquitous, even on mobile devices.
Multiple global environments are useful for creating separate instances of the Gambit VM in the same OS process which do not share the global variables. This might be useful in a "shared nothing" multithreading model (like Erlang/Termite). But for a shared-memory model (like Multilisp) a shared global environment would be appropriate, with the advantage of no overhead for accessing global variables.
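For concreteness, the two programming models already look like this with Gambit's green threads today (a sketch only, and all on one core until SMP support exists):

;; Shared-memory style (Multilisp-like): threads update a shared global
;; under a mutex.
(define counter 0)
(define counter-mutex (make-mutex))

(define (incr!)
  (mutex-lock! counter-mutex)
  (set! counter (+ counter 1))
  (mutex-unlock! counter-mutex))

;; Shared-nothing style (Erlang/Termite-like): threads exchange messages
;; through their mailboxes instead of touching shared state.
(define worker
  (thread-start!
   (make-thread
    (lambda ()
      (let loop ()
        (let ((msg (thread-receive)))
          (thread-send (car msg) (list 'reply (cdr msg)))
          (loop)))))))

(thread-send worker (cons (current-thread) 42))
(pretty-print (thread-receive))   ;; => (reply 42)

The first style depends on a shared global environment; the second maps naturally onto separate VM instances that share nothing.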
Marc
On Apr 20, 2012, at 11:50 PM, Marc Feeley wrote:
Multiple global environments are useful for creating separate instances of the Gambit VM in the same OS process which do not share the global variables. This might be useful in a "shared nothing" multithreading model (like Erlang/Termite). But for a shared-memory model (like Multilisp) a shared global environment would be appropriate, with the advantage of no overhead for accessing global variables.
Would those separate VMs run in separate OS threads (which the OS could then schedule on separate cores even if they are running in the same process)? In other words, for many of us, the advantage of multiple VM instances is that they could take advantage of multiple cores. Having multiple VM instances in the same OS process seems, to some of us at least, somewhat less useful.
warmest regards,
Ralph
Raffael Cavallaro raffaelcavallaro@me.com
Re Marc:
Multiple global environments are useful for creating separate instances of the Gambit VM in the same OS process which do not share the global variables. This might be useful in a "shared nothing" multithreading model (like Erlang/Termite). But for a shared-memory model (like Multilisp) a shared global environment would be appropriate, with the advantage of no overhead for accessing global variables.
That there will be no overhead on global variable access with 'SMP Gambit' is great!
Re Raffael:
Would those separate VMs run in separate OS threads (which the OS could then schedule on separate cores even if they are running in the same process)?
In other words, for many of us, the advantage of multiple VM instances is that they could take advantage of multiple cores. Having multiple VM instances in the same OS process seems, to some of us at least, somewhat less useful.
Mm - for now, while SMP shared-memory-model support as per Marc's PhD thesis is not available, it's great that we now have the option of compiling Gambit to support concurrent use of multiple OS threads, and it will be even greater when that SMP support is there :-D
Kind regards, Mikael