Maybe the right answer is "Don't do this in Gambit", but I'd like to give it a try:
I'm writing an application, in Gambit. It does OpenGL graphics. It runs at 100fps. It's interpreted.
Now, put down the pitchfork -- the only thing it's doing at 100 Hz is, for 20 different objects: glLoadIdentity, glPushMatrix, some rotation, glCallList, glPopMatrix.
This works fine, _except_ when I get hit with a Gambit GC: it costs me about 70 ms, which becomes a noticeable lag in my otherwise smoothly rotating screen.
What are my options? Can I get a thread local heap? My basic usage is the following:
launch the Gambit app --> it opens up a GLUT window --> it listens on port ABCDE for new graphics primitives
In my editor window, I type some code; I send new primitives to port ABCDE; my Gambit app spends a bit of time building them into a new display list.
That's all. What can I do in this particular situation?
Thanks!
Problem resolved: solution = (##gc)
:-)
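Presumably this means calling (##gc) yourself at a moment when a pause is acceptable, instead of letting a collection land in the middle of a frame. A minimal sketch of that idea, where draw-frame and swap-buffers are hypothetical stand-ins for whatever the application actually does each frame:

(define (render-frame)
  (draw-frame)    ;; the 100 Hz drawing work (rotations, glCallList, ...)
  (swap-buffers)
  (##gc))         ;; collect now, while a short pause is least noticeable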
On Sat, Jun 13, 2009 at 11:55 PM, lowly coder <lowlycoder@huoyanjinjing.com> wrote:
Maybe the right answer is "Don't do this in Gambit", but I'd like to give it a try:
I'm writing an application, in Gambit. It does OpenGL graphics. It runs at 100fps. It's interpreted.
Now, put down the pitchfork -- the only thing it's doing at 100 Hz is, for 20 different objects: glLoadIdentity, glPushMatrix, some rotation, glCallList, glPopMatrix.
This works fine, _except_ when I get hit with a Gambit GC: it costs me about 70 ms, which becomes a noticeable lag in my otherwise smoothly rotating screen.
What are my options? Can I get a thread local heap? My basic usage is the following:
launch the Gambit app --> it opens up a GLUT window --> it listens on port ABCDE for new graphics primitives
In my editor window, I type some code; I send new primitives to port ABCDE; my Gambit app spends a bit of time building them into a new display list.
That's all. What can I do in this particular situation?
Thanks!
lowly coder wrote:
Maybe the right answer is "Don't do this in Gambit", but I'd like to give it a try:
I'm writing an application, in Gambit. It does OpenGL graphics. It runs at 100fps. It's interpreted.
Now, put down the pitchfork -- the only thing it's doing at 100 Hz is, for 20 different objects: glLoadIdentity, glPushMatrix, some rotation, glCallList, glPopMatrix.
This works fine, _except_ when I get hit with a Gambit GC: it costs me about 70 ms, which becomes a noticeable lag in my otherwise smoothly rotating screen.
If your GC costs you about 70 ms, that means you're probably allocating a lot of data in the heap (probably some closures?). You might want to limit your memory allocation in order to shorten your GC times.
Also, if you have big chunks of static data, ideally stuffed in a flat format like a vector, then you can do a (##still-copy obj) so that the GC will not move it in the heap after each collection. Of course, this won't work for list-like structures, but it should work fine for big, flat define-type instances.
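As a rough illustration of that suggestion (the type and values here are hypothetical, not from the application above):

(define-type point3 x y z)            ;; a flat record type
(define p (make-point3 1. 2. 3.))
(define p-still (##still-copy p))     ;; this copy will not be moved by the GC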
David
In the code running at 100fps, I do a minimal amount of allocating. It's practically:
(vector-for-map *some-vec* draw-object)
(define (draw-object x) (call some gl-rotate / gl-translate) (gl/CallList x))
However, I do have _A LOT_ of static data lying around. In fact, I have large geometric models (from which I derive the OpenGL display lists) lying around in memory. They're vectors of lists / other vectors / other lists / ... of points / quads.
This data also doesn't change, except at _very_ predefined locations.
I guess if I can do something like:
(##gc) ( somehow tell gambit that the data currently left over is mostly static? )
... continue running ...
That would be ideal.
Why does vector vs. list matter much for ##still-copy?
Thanks!
On Sun, Jun 14, 2009 at 5:34 AM, David St-Hilaire <sthilaid@iro.umontreal.ca> wrote:
lowly coder wrote:
Maybe the right answer is "Don't do this in Gambit", but I'd like to give it a try:
I'm writing an application, in Gambit. It does OpenGL graphics. It runs at 100fps. It's interpreted.
Now, put down the pitchfork -- the only thing it's doing at 100 Hz is, for 20 different objects: glLoadIdentity, glPushMatrix, some rotation, glCallList, glPopMatrix.
This works fine, _except_ when I get hit with a Gambit GC: it costs me about 70 ms, which becomes a noticeable lag in my otherwise smoothly rotating screen.
If your GC costs you about 70 ms, that means you're probably allocating a lot of data in the heap (probably some closures?). You might want to limit your memory allocation in order to shorten your GC times.
Also, if you have big chunks of static data, ideally stuffed in a flat format like a vector, then you can do a (##still-copy obj) so that the GC will not move it in the heap after each collection. Of course, this won't work for list-like structures, but it should work fine for big, flat define-type instances.
David
On 14-Jun-09, at 9:35 AM, lowly coder wrote:
In the code running at 100fps, I do a minimal amount of allocating. It's practically:
(vector-for-map *some-vec* draw-object)
(define (draw-object x) (call some gl-rotate / gl-translate) (gl/CallList x))
However, I do have _A LOT_ of static data lying around. In fact, I have large geometric models (from which I derive the OpenGL display lists) lying around in memory. They're vectors of lists / other vectors / other lists / ... of points / quads.
This data also doesn't change, except at _very_ predefined locations.
I guess if I can do something like:
(##gc) ( somehow tell gambit that the data currently left over is mostly static? )
... continue running ...
That would be ideal.
Why does vector vs. list matter much for ##still-copy?
Thanks!
Are you using plain Scheme vectors to store your numeric data? Here it will pay off to use f32vectors, f64vectors or any homogeneous numerical vectors. That's because the content of these vectors is not scanned by the garbage collector. As I said in a previous message, if these vectors are still objects they will not be moved by the garbage collector, which further reduces the time needed. You can force an object to be still by passing it to ##still-copy. For example:
(define v (f32vector 1. 2. 3.)) (define v2 (##still-copy v)) (list v v2)
(#f32(1. 2. 3.) #f32(1. 2. 3.))
(##still-copy (list 1 2 3))
(1 2 3)
Note that in the last call to ##still-copy, a still copy is only created for the pair at the head of the list. In other words ##still-copy does a shallow copy. If you want a deep copy you have to code it yourself. That's why David said ##still-copy is less useful for lists.
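One possible sketch of such a deep copy for list structure (only pairs are handled here; other kinds of objects -- vectors, records, etc. -- would need their own cases):

(define (deep-still-copy obj)
  (if (pair? obj)
      (##still-copy (cons (deep-still-copy (car obj))
                          (deep-still-copy (cdr obj))))
      obj))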
An alternative is to store the data as C data, and use the FFI to access it. The difficulty level will depend on the data and how you manipulate it.
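A minimal sketch of that approach, using made-up names and assuming the file is compiled with gsc (since the FFI body is C code):

(c-declare #<<c-declare-end
/* hypothetical static model data kept on the C side */
static float model_data[] = { 1.0f, 2.0f, 3.0f };
float model_data_ref (int i) { return model_data[i]; }
c-declare-end
)

(define model-data-ref (c-lambda (int) float "model_data_ref"))

;; (model-data-ref 1) => 2.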
If you want to allocate "constant Scheme data" the only option right now is to create a Scheme file like:
(define my-constant-data '#f32(1.0 2.0 3.0))
then compile the file with gsc and load the object file into your running application.
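For example, assuming the definition above is saved in a file named constant-data.scm (a made-up name):

$ gsc constant-data.scm

and then, in the running application:

(load "constant-data")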
I'm working on a solution for allocating constant data at run time, but it is low on my TODO list.
Marc
Does Gambit have a counter for "number of live, non-still objects?" Essentially this is what I need to minimize now, and being able to benchmark it will be helpful.
Thanks!
On Sun, Jun 14, 2009 at 7:19 AM, Marc Feeley <feeley@iro.umontreal.ca> wrote:
On 14-Jun-09, at 9:35 AM, lowly coder wrote:
In the code running at 100fps, I do a minimal amount of allocating. It's practically:
(vector-for-map *some-vec* draw-object)
(define (draw-object x) (call some gl-rotate / gl-translate) (gl/CallList x))
However, I do have _A LOT_ of static data lying around. In fact, I have large geometric models (from which I derive the OpenGL display lists) lying around in memory. They're vectors of lists / other vectors / other lists / ... of points / quads.
This data also doesn't change, except at _very_ predefined locations.
I guess if I can do something like:
(##gc) ( somehow tell gambit that the data currently left over is mostly static? )
... continue running ...
That would be ideal.
Why does vector vs. list matter much for ##still-copy?
Thanks!
Are you using plain Scheme vectors to store your numeric data? Here it will pay off to use f32vectors, f64vectors or any homogeneous numerical vectors. That's because the content of these vectors is not scanned by the garbage collector. As I said in a previous message, if these vectors are still objects they will not be moved by the garbage collector, which further reduces the time needed. You can force an object to be still by passing it to ##still-copy. For example:
(define v (f32vector 1. 2. 3.)) (define v2 (##still-copy v)) (list v v2)
(#f32(1. 2. 3.) #f32(1. 2. 3.))
(##still-copy (list 1 2 3))
(1 2 3)
Note that in the last call to ##still-copy, a still copy is only created for the pair at the head of the list. In other words ##still-copy does a shallow copy. If you want a deep copy you have to code it yourself. That's why David said ##still-copy is less useful for lists.
An alternative is to store the data as C data, and use the FFI to access it. The difficulty level will depend on the data and how you manipulate it.
If you want to allocate "constant Scheme data" the only option right now is to create a Scheme file like:
(define my-constant-data '#f32(1.0 2.0 3.0))
then compile the file with gsc and load the object file into your running application.
I'm working on a solution for allocating constant data at run time, but it is low on my TODO list.
Marc
On 23-Jun-09, at 2:53 AM, lowly coder wrote:
Does Gambit have a counter for "number of live, non-still objects?" Essentially this is what I need to minimize now, and being able to benchmark it will be helpful.
Try running gsi -:d2 . It will give you GC reports of the form:
*** GC: 1 ms, 30.7M alloc, 199K heap, 41.1K live (21% 27144+14936)
In parentheses you have the number of bytes allocated for movable objects and for nonmovable objects (still and permanent).
You can also get this information by calling (##process-statistics). The last two numbers in the vector are the number of bytes allocated for movable objects and for nonmovable objects. For details look for the call to ##process-statistics in the file lib/_nonstd.scm .
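For example, a small helper based on that description (the names here are made up; only the last two slots of the vector are used):

(define (allocation-stats)
  (let* ((stats (##process-statistics))
         (n (vector-length stats)))
    (list (cons 'movable-bytes    (vector-ref stats (- n 2)))
          (cons 'nonmovable-bytes (vector-ref stats (- n 1))))))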
Marc
On 14-Jun-09, at 2:55 AM, lowly coder wrote:
I'm writing an application, in Gambit. It does OpenGL graphics. It runs at 100fps. It's interpreted.
Now, put down the pitchfork -- the only thing it's doing at 100 Hz is, for 20 different objects: glLoadIdentity, glPushMatrix, some rotation, glCallList, glPopMatrix.
This works fine, _except_ when I get hit with a Gambit GC: it costs me about 70 ms, which becomes a noticeable lag in my otherwise smoothly rotating screen.
What are my options?
It sounds like the problem is that your live data is big. Try running the program with the -:d2 option to see the GC statistics which look like this:
*** GC: 1 ms, 185K alloc, 198K heap, 30.6K live (15% 16640+14700)
Here the program has 30.6K of live data. A rule of thumb is that, on a typical desktop computer, each megabyte of live data will cost 2 milliseconds of garbage collection time. So it would seem that your program has about 30 megabytes of live data or so.
Note that "interpreted code" is considered to be data, because the interpreted code is represented with Scheme data (closures, vectors, lists, etc). So if your code base is big, compiling your program may reduce the live data and thus reduce the length of the garbage collection pauses. Note that in Gambit, you can mix interpreted and compiled code. Moreover, you can redefine functions (at the REPL or with load) so that the new definition of these functions is interpreted (i.e. you don't lose the ability to debug the code).
Another thing to do is to allocate program data sparingly, or use a compact representation. For data that is used infrequently, it is sometimes possible to serialize the data (into a string or a u8vector), then save it away (to a global variable or the file system), and then deserialize the string when you need to access or modify the data. This is a win because the content of strings and u8vectors is not scanned by the garbage collector, and large strings and u8vectors are not moved by the garbage collector (they are "still" objects).
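A rough sketch of that serialization trick, assuming the structure can be serialized with object->u8vector (big-model is a hypothetical variable holding the rarely used data):

(define packed-model (object->u8vector big-model))  ;; u8vector contents are not scanned by the GC
(set! big-model #f)                                 ;; drop the scanned representation

(define (with-model proc)
  ;; deserialize only when the data is actually needed
  (proc (u8vector->object packed-model)))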
Finally, you can start the application with a large heap with the -:m runtime option. Although this does not affect the duration of the GC pause, it will decrease its frequency of occurrence.
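For example, something like the following (my recollection is that the -:m argument is given in kilobytes, so this would ask for roughly a 100 MB heap; the file name is made up):

gsi -:m100000 my-app.scm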
Marc