On 20-Jun-07, at 12:49 AM, naruto canada wrote:
On 6/19/07, Marc Feeley feeley@iro.umontreal.ca wrote:
On 19-Jun-07, at 9:59 AM, naruto canada wrote:
gambit-c seems to have a problem with "load". I was able to load a 26 MB test file with petite and mzscheme, but it crashes gambit-c (gsi).
By "crash" do you mean a segment violation or a "heap overflow" message?
I think it's just out of memory; "top" shows memory usage increasing quickly.
If it didn't output a "heap overflow" message, my guess is that Gambit is still computing but with such a large heap that it is very slow due to thrashing.
By my calculations it will require roughly 500 MB of RAM to load a 26 MB file.
I think it takes more than 500 MB; I have 1 GB of memory, and the program crashes before loading completes.
You might be able to avoid thrashing on your machine by specifying a maximum heap size with the -:hXXX option. My test on a 1GB RAM machine seems to indicate that loading the 26MB file of integer sequences generates a peak of about 630MB of live data. So this should work:
gsi -:h700000 test.scm
To get the heap usage (live data, etc), do:
gsi -:h700000 -e '(gc-report-set! #t)' test.scm
I know the problem can be worked around by simply doing "cat code.sc | gsi". I don't know if it's possible to treat "load" simply as another input stream, like petite or mzscheme do.
No. Gambit's model is that files loaded by "load" are complete "modules". The module has to be parsed (to check for lexical and syntax errors) and macro-expanded before it can execute. This is so loading an interpreted file has (roughly) the same semantics as loading a compiled file.
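For comparison, an interpreter that accepts the file as a stream of toplevel forms effectively does something like the following (a sketch only; load-incrementally is a hypothetical name, not a Gambit procedure, and this approach gives up the whole-module checking just described):

(define (load-incrementally filename)
  (with-input-from-file filename
    (lambda ()
      (let loop ()
        (let ((form (read)))
          (if (not (eof-object? form))
              (begin
                (eval form)  ;; each toplevel form is read, expanded and run on its own
                (loop))))))))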
What if someone needs to write test cases based on the execution of a large file? Then gambit-c won't be able to handle it "in-code", so to speak; it has to "cat" the test case after "code.sc". Not pretty, but workable. Anyway, it's not a serious problem.
I think a better approach is to treat the sequences as "data". Then you just need a small driver program that does:
(define (go)
  (with-input-from-file "sequences.dat"
    (lambda ()
      (let loop ()
        (let ((s (read)))
          (if (not (eof-object? s))
              (begin
                (solve-seq s)
                (loop))))))))
(go)
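For illustration, sequences.dat is assumed to hold one Scheme datum per sequence, and solve-seq, which is never defined in this thread, is whatever per-sequence processing is needed; a stand-in for testing the driver might be:

;; hypothetical stand-in for solve-seq, just to exercise the driver above;
;; sequences.dat is assumed to contain one list per sequence, e.g.
;;   (1 1 2 3 5 8 13 21)
;;   (2 4 6 8 10 12)
(define (solve-seq s)
  (display "sequence of length ")
  (display (length s))
  (newline))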
Marc
Speaking of heaps, I can't get gambit to let me (make-vector 10000000) with any heap size...
% unlimit
% gsi -:h4000000
Gambit Version 4.0 beta 22
> (make-vector 10000000)
*** ERROR IN ##make-vector -- Heap overflow
1>
Any ideas?
On 20-Jun-07, at 9:12 AM, |/|/ Bendick wrote:
Speaking of heaps, I can't get gambit to let me (make-vector 10000000) with any heap size...
% unlimit
% gsi -:h4000000
Gambit Version 4.0 beta 22
> (make-vector 10000000)
*** ERROR IN ##make-vector -- Heap overflow
1>
Any ideas?
In memory, objects are encoded as a sequence of machine words. The first machine word is the header. It encodes some GC information and the object's type (in the lower 8 bits of the word) and uses the remaining bits (24 on a 32-bit machine, or 56 on a 64-bit machine) to encode the length of the object in bytes.
This means that on a 32-bit machine the largest vector is just shy of 2^24 bytes, or 2^22 words, i.e. roughly 4 million elements. On a 64-bit machine, virtual memory is the only limit.
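A back-of-the-envelope check of those numbers, assuming 4-byte words (a rough sketch; the exact boundary also depends on per-object overhead):

(expt 2 24)               ;; => 16777216, the byte range of the 24-bit length field (~16 MB)
(quotient (expt 2 24) 4)  ;; => 4194304, so roughly 2^22 four-byte elements per vector
(* 10000000 4)            ;; => 40000000 bytes for (make-vector 10000000), well past the
                          ;;    24-bit byte count, so it overflows regardless of heap size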
Marc
Ahh.. enlightening. If that is the case, shouldn't (define big (expt 2 (expt 2 25))) fail? (It does not).
On Jun 20, 2007, at 10:02 AM, |/|/ Bendick wrote:
Ahh.. enlightening. If that is the case, shouldn't (define big (expt 2 (expt 2 25))) fail? (It does not).
It's just shy of 2^24 *bytes*:
> (integer-length (expt 2 (expt 2 26)))
67108865
> (integer-length (expt 2 (expt 2 27)))
*** ERROR IN ##bignum.make -- Heap overflow
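The byte counts behind those two results (simple arithmetic; it ignores the bignum's per-object overhead):

(quotient (expt 2 26) 8)  ;; => 8388608 bytes (~8 MB), still under the ~16 MB (2^24 byte) limit
(quotient (expt 2 27) 8)  ;; => 16777216 bytes (~16 MB), which no longer fits in the 24-bit byte field
;; and (expt 2 (expt 2 25)) only needs about 4 MB, which is why it succeeded.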
Brad
*sigh* Looks like my A in discrete was for attendance.
Is it possible to have gambit use two words for its headers on a 32-bit machine?
On 20-Jun-07, at 4:45 PM, |/|/ Bendick wrote:
Is it possible to have gambit use two words for its headers on a 32-bit machine?
Anything is possible, of course. But this adds extra space and time overhead to vector-like object operations (strings, u8vectors, vectors, ...). I don't think it's worth the trouble (after all, very few people complain about this limitation). An alternative that is less costly to implement is to encode the length in number of *words* instead of bytes. That way you could have vectors with 2^24-1 elements, which is 4 times larger than the current 2^22-1 element limit.
Marc
Marc Feeley wrote:
On 20-Jun-07, at 4:45 PM, |/|/ Bendick wrote:
Is it possible to have gambit use two words for its headers on a 32-bit machine?
What about using a vector of vectors? This would require one more level of indirection, but that's quite cheap (*), doesn't have any relevant limit, and doesn't require changes to the core.
(* Structures defined with define-type/-structure are notoriously slow when compiled safely, but that's because of the type checking or something (could that be sped up, Marc?). You could just use a special symbol in the first position of the outer-level vector for your type checking, so that you don't need more nesting; that way it should be really fast.)
For homogeneous vectors, you could also use the ffi (with c-lambda or ##c-code) and malloc to write your own representation (it should only be some 30-50 lines of code or so).
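Here is a minimal sketch of the vector-of-vectors idea (the big-vector names and the chunk size are made up for this example; the chunk size just has to stay under the per-vector limit). A tag symbol in slot 0 stands in for the type check, as suggested above:

(define big-vector-chunk-size 1000000)  ;; well below the 2^22-1 element limit

(define (make-big-vector n fill)
  (let* ((nchunks (quotient (+ n (- big-vector-chunk-size 1))
                            big-vector-chunk-size))
         (outer (make-vector (+ nchunks 2))))
    (vector-set! outer 0 'big-vector)   ;; tag symbol for cheap type checking
    (vector-set! outer 1 n)             ;; total element count
    (let loop ((i 0) (left n))
      (if (< i nchunks)
          (begin
            (vector-set! outer (+ i 2)
                         (make-vector (min left big-vector-chunk-size) fill))
            (loop (+ i 1) (- left big-vector-chunk-size)))))
    outer))

(define (big-vector? x)
  (and (vector? x)
       (> (vector-length x) 0)
       (eq? (vector-ref x 0) 'big-vector)))

(define (big-vector-length bv)
  (vector-ref bv 1))

(define (big-vector-ref bv i)
  (vector-ref (vector-ref bv (+ 2 (quotient i big-vector-chunk-size)))
              (remainder i big-vector-chunk-size)))

(define (big-vector-set! bv i x)
  (vector-set! (vector-ref bv (+ 2 (quotient i big-vector-chunk-size)))
               (remainder i big-vector-chunk-size)
               x))

;; e.g. (define v (make-big-vector 10000000 0))
;;      (big-vector-set! v 9999999 42)
;;      (big-vector-ref v 9999999)   ;; => 42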
Anything is possible, of course. But this adds extra space and time overhead to vector-like object operations (strings, u8vectors, vectors, ...). I don't think it's worth the trouble (after all, very few people complain about this limitation). An alternative that is less costly to implement is to encode the length in number of *words* instead of bytes.
(I'm using ___HD_BYTES(___HEADER(obj)) in some places to determine the length of an object in bytes.)
Christian.
Replying to myself:
For homogeneous vectors, you could also use the ffi (with c-lambda or ##c-code) and malloc to write your own representation (it should only be some 30-50 lines of code or so).
(Assuming you add a finalizer for freeing the memory, of course.)
One issue to note is that the gc isn't aware of the big malloc'ed chunks, so it doesn't "feel" the memory pressure if you're allocating many huge ffi vectors, and you risk running out of memory quickly. I wonder what the best solution to this problem is. One possibility would be to run ##gc once if malloc fails. That will trigger full gc's quite often (but honestly I'm still not clear whether Gambit employs a generational gc; movable objects are still movable after a gc, though there might be a hierarchy of movable object areas, so I don't know whether ##gc is more costly than a normal incremental collection). Another way would be some hook in the gc to inform it of the ffi memory.
(BTW the same issue arises when filehandles run out because someone isn't explicitly closing input ports; if filesystem calls fail because of having run out of filehandles, a gc could be triggered and the operation retried once. This can matter when wrapping input ports as streams (lazy lists), because only a finalizer will close the port if a stream isn't exhausted (read to the end). Currently I'm ignoring this problem, thinking I could catch exceptions when opening and call ##gc as mentioned above, but I wonder if it wouldn't be more efficient if there were some ##some-gc procedure which cleans up just the first generation, and maybe put that into the core instead of trapping exceptions in the library.)
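A rough way to express the "force a gc and retry once" idea from the Scheme side (a sketch; with-gc-retry is a hypothetical helper, and it assumes the failed allocation or open shows up as an exception):

(define (with-gc-retry thunk)
  ;; Run thunk; if it raises an exception (e.g. a foreign allocator out of
  ;; memory, or too many open files), force a collection with ##gc so that
  ;; finalizers get a chance to release resources, then retry once.  A
  ;; second failure propagates to the caller as usual.
  (with-exception-catcher
   (lambda (exc)
     (##gc)
     (thunk))
   thunk))

;; e.g. (with-gc-retry (lambda () (open-input-file "data.txt")))
;; or   (with-gc-retry (lambda () (make-huge-ffi-vector n)))  ;; hypothetical ffi allocator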
Christian.