Dear all, I have been using Gambit to write programs for my research, but recently I encountered a problem. When I run some programs, I get this error:
*** ERROR IN ##rest-param-check-heap -- Heap overflow
1> ,b
0  ##rest-param-check-heap
1  trace-back-path
2  find-seq
3  find-seq
4  find-motif-in-occ-list
5  find-motif-in-occ-list
6  find-motif-in-occ-list
7  find-motif-in-fasta-pruned
8  find-motif-in-fasta-pruned
9  find-motif-in-fasta-pruned
...
12 with-output-to-file
13 for-each
14 for-each
15 (interaction) (stdin)@69:1 (for-each (lambda (errL) ...
1> ,t
I am using Gambit-C v4.4.4 on Windows XP with 1.95 GB of RAM. I have tried rebooting and running again, and have added a call to (##gc) for each run like this:
(for errL '(0.2 0.3)
  (for s (append game-data-sets tompa-data-sets)
    (##gc)
    (with-output-to-file
        (string-append "motif_finder_res/game_tompa/" s "_g_e_"
                       (number->string errL) "_res.txt")
      (f0 (find-motif-in-fasta (codify-fasta (read-fasta s) DNA-code)
                               5 20 (k-best-keeper 50 better-motif) errL)))))
But the same problem arises, and Windows XP says virtual memory is low. Is this a problem of Gambit or of Windows XP? Is the garbage collector a mark-and-sweep one, and is fragmentation occurring?
Regards, Peter
On 6-Sep-09, at 11:54 PM, peter lo wrote:
Dear all, I have been using Gambit to write programs for my research, but recently I encountered a problem. When I run some programs, I get this error:
*** ERROR IN ##rest-param-check-heap -- Heap overflow
1> ,b
0  ##rest-param-check-heap
1  trace-back-path
2  find-seq
3  find-seq
4  find-motif-in-occ-list
5  find-motif-in-occ-list
6  find-motif-in-occ-list
7  find-motif-in-fasta-pruned
8  find-motif-in-fasta-pruned
9  find-motif-in-fasta-pruned
...
12 with-output-to-file
13 for-each
14 for-each
15 (interaction) (stdin)@69:1 (for-each (lambda (errL) ...
1> ,t
A ,be would have been more informative, to see the variables in each frame (to have an idea of the type of data you are manipulating).
I am using Gambit-C v4.4.4 on Windows XP with 1.95 GB of RAM. I have tried rebooting and running again, and have added a call to (##gc) for each run like this:
(for errL '(0.2 0.3)
  (for s (append game-data-sets tompa-data-sets)
    (##gc)
    (with-output-to-file
        (string-append "motif_finder_res/game_tompa/" s "_g_e_"
                       (number->string errL) "_res.txt")
      (f0 (find-motif-in-fasta (codify-fasta (read-fasta s) DNA-code)
                               5 20 (k-best-keeper 50 better-motif) errL)))))
Calling (##gc) explicitly will not help. One way to get more information on what is happening is to add the -:d2 option when you start gsi. This will give you a trace of the GCs that occur, the size of the heap, the size of the live objects, etc.
But the same problem arises, and Windows XP says virtual memory is low. Is this a problem of Gambit or of Windows XP? Is the garbage collector a mark-and-sweep one, and is fragmentation occurring?
Don't worry about the GC algorithm. Most objects are allocated in an area that is compacted.
My guess is that your program has a memory leak. It may be due to the definition of the "for" macro above. Have you tried compiling your program? (I advise using v4.5.1, which has a -exe option to create standalone executables.)
Marc
Thanks for the reply. I have updated Gambit-C to v4.5.1 and changed the vectors to homogeneous s8vectors, hoping to reduce the memory usage, but the same problem arises. I am using gsc, but I am compiling a number of .scm files as .o files and loading them dynamically into the REPL; is it possible that this is causing the problem?
I will check my program more closely to see if there are any parts that are holding on to too much memory.
As for the possibility of memory leaks, this puzzles me. In a language with garbage collection, what does it mean to have memory leaks?
Regards, Peter
2009/9/9 peter lo peter19852001@yahoo.com.hk:
As for the possibility of memory leaks, this puzzles me. In a language with garbage collection, what does it mean to have memory leaks?
We'd need a bit more information about your code, but in a language that you can interface with C libraries, some allocations are not done by the GC. So, if you happen to allocate in a C library, that might be where your leak is.
Is your project 100% Scheme code? Do you use any external code, such as OpenGL or an SQL library? Is it portable to Linux/BSD/MacOS? Does the heap overflow persist there?
Can you reduce your problematic code to roughly 50 lines or fewer and post it here so that others can try to reproduce it?
Cheers,
P!
2009/9/9 peter lo peter19852001@yahoo.com.hk:
I am compiling a number of .scm files as .o files and loading them dynamically into the REPL; is it possible that this is causing the problem?
Not likely.
I will check my program more closely to see if there are any parts that are holding on to too much memory.
As for the possibility of memory leaks, this puzzles me. In a language with garbage collection, what does it mean to have memory leaks?
Exactly what you said yourself. Code that inadvertently keeps memory around. Heavy use of symbols (in the sense of memoized strings) is probably the easiest way to leak memory. Just about anything that is linked to a global data structure is a good candidate for leaking memory.
If the garbage collector thinks your program could possibly ever use a piece of memory again, it will not free that memory. The collector is way more conservative than your brain, so it is certainly possible to fool yourself about whether or not you *intend* to use data when you have in fact kept it around.
david rush
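To make David's point concrete, here is a minimal sketch of such a leak (the names *cache* and expensive-computation are made up for illustration; make-table, table-ref and table-set! are the same Gambit procedures used elsewhere in this thread):

(define *cache* (make-table))  ; a global table is always reachable

(define (cached-result key)
  (or (table-ref *cache* key #f)
      (let ((val (expensive-computation key)))
        ;; every key ever seen stays reachable through *cache*, so the GC can
        ;; never reclaim the values, even if the program never looks them up again
        (table-set! *cache* key val)
        val)))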
On 9-Sep-09, at 5:36 AM, David Rush wrote:
As for the possibility of memory leaks, this puzzles me. In a language with garbage collection, what does it mean to have memory leaks?
Exactly what you said yourself. Code that inadvertently keeps memory around. Heavy use of symbols (in the sense of memoized strings) is probably the easiest way to leak memory. Just about anything that is linked to a global data structure is a good candidate for leaking memory.
If the garbage collector thinks your program could possibly ever use a piece of memory again, it will not free that memory. The collector is way more conservative than your brain, so it is certainly possible to fool yourself about whether or not you *intend* to use data when you have in fact kept it around.
Nice explanation.
Simply: garbage collection solves the "dangling pointer" problem completely, and it only helps with the "memory leak" problem because you can write programs that keep references to data that will never be used by the program. Garbage collectors are "conservative" in the sense that they use "reachability" to determine "usefulness" of data.
Marc
Dear all, Thanks for the replies and clarifications. After some investigation, I believe that the cause of the heap overflow is not memory leakage. The whole program is in Scheme, so it cannot be leakage from external C libraries. And I have checked the program; it seems that there is no "really useless" data hanging around that cannot be garbage collected. The real reason is simply that the input data is too large, so there are too many intermediate results, requiring > 1.6 GB of live memory.
It was my fault: I did not expect the program to need that much memory, so the representation of the intermediate results is not particularly compact. Previously I used a structure with 6 members, but in fact I only need 2 of them to do the computations. After changing the representation to a simple cons cell, the program seems to manage to keep running, with a peak of about 1700 MB of RAM.
Another thing I have noticed is that the system holds at most ~1700 MB of RAM for the heap, even when 100% of it is live objects; is this by design, or just an accident of Windows XP? When the percentage of live objects gets close to 100%, the GCs become more frequent, since each one reclaims little memory, and each takes around 2 seconds because many objects are examined to determine reachability, so the system does less useful work. Now I am trying to reduce the allocation of short-lived data so that there will be fewer GCs. I am also considering switching to a Linux system.
Anyway, thanks for the help and sorry for bothering you.
Thanks. Peter
On 9-Sep-09, at 8:51 AM, peter lo wrote:
Dear all, Thanks for the replies and clarifications. After some investigation, I believe that the cause of the heap overflow is not memory leakage. The whole program is in Scheme, so it cannot be leakage from external C libraries. And I have checked the program; it seems that there is no "really useless" data hanging around that cannot be garbage collected. The real reason is simply that the input data is too large, so there are too many intermediate results, requiring > 1.6 GB of live memory.
It was my fault: I did not expect the program to need that much memory, so the representation of the intermediate results is not particularly compact. Previously I used a structure with 6 members, but in fact I only need 2 of them to do the computations. After changing the representation to a simple cons cell, the program seems to manage to keep running, with a peak of about 1700 MB of RAM.
Another thing I have noticed is that the system holds at most ~1700 MB of RAM for the heap, even when 100% of it is live objects; is this by design, or just an accident of Windows XP? When the percentage of live objects gets close to 100%, the GCs become more frequent, since each one reclaims little memory, and each takes around 2 seconds because many objects are examined to determine reachability, so the system does less useful work. Now I am trying to reduce the allocation of short-lived data so that there will be fewer GCs. I am also considering switching to a Linux system.
By default Gambit's memory management system resizes the heap so that after a GC 50% of the heap is occupied by live objects (this can be changed with the -:lXXX runtime option). If the resizing requires that the heap grow, then the runtime will allocate new "movable sections" (which are 512 Kbytes each) by calling the C "malloc" function. If malloc returns NULL, indicating that no more memory is available to the process, then the system will keep on running, but with more than 50% of the heap occupied by live objects. If you are at 100% live occupation the GCs will be very frequent and very little useful computation will occur compared to the garbage collections. Consider yourself lucky that the program managed to finish executing!
Solutions? Use a more compact data representation (you have started doing that). Use "still" objects (i.e. ##still-copy) that are more compactly represented when objects have 5 fields or more. Enable virtual memory (but then your system will slow down due to swapping). Buy more RAM. Switch from a 64 bit system to a 32 bit system (if you are using a 64 bit system).
Marc
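As a rough sketch of the "still object" suggestion (the occurrence layout below is hypothetical, not Peter's actual record; ##still-copy is the Gambit procedure named above):

(define (make-occ seq-no start end score alignment)
  ;; build the record as a plain vector, then copy it into a "still" object so
  ;; the GC does not reserve to-space for it (worthwhile for objects of 5+ fields)
  (##still-copy (vector seq-no start end score alignment)))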
Peter, It would be interesting to see your program, or at least the part that cranks up the memory usage so much. Maybe some of the people on the list would be able to give you a hint on how to better represent/manipulate your data.
This thread has been going on for some time now and I feel that all the general, or theoretical if you will, answers have been given. It'll probably be time to show some relevant code if you continue having trouble in the future.
Pavel
As the code is very long and spans a number of files, maybe I will only post a fragment of it. I will explain just a bit what I am doing. Basically given a number of input DNA sequences, I would like to find "similar" (in terms of edit distance) subsequences and rank them using a scoring scheme. The lengths of the subsequences range from 5 to 20. Previously, for each subsequence length, I would go through all the subsequences, find their similar occurrences, and rank them. But this is too slow, so I use a heuristic, which is to record the similar occurrences of each subsequence of length L, then use them to refine the search for subsequences of length L+1. This is the part that is causing trouble. As the total length of the input sequences increases and L increases, the number of unique subsequences increases very rapidly, and since each of them keeps a list of occurrences, the amount of live data when going from length L to length L+1 can get very large.
As I have mentioned earlier, previously I used a structure to keep the occurrences, so too much memory was used. Now I use a cons cell for the occurrences when going from L to L+1, and the amount of memory used is reduced by a significant fraction, but the basic problem is in the heuristic itself.
I have thought of doing things in a different order, which is to make the loop over L (from 5 to 20) an inner loop instead; then I would not need to keep that many occurrences, but there may be duplicated work.
By the way, I am currently trying to reduce the allocation of short-lived objects. I am sure there are general tips on the web about this, but I am not sure whether there are tips more specific to Gambit Scheme, taking into account its garbage collector.
Regards, Peter
;;; Here is the code
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;;; Note that for a pattern of length L to match with score >= threshold, its subpatterns
;;; must also match with score >= threshold, therefore, when going from length L to L+1,
;;; for a given pattern, we use the occurrences of its length-L subpattern to reduce some search.
;;; For simplicity, we check at the occurrences of one of its length-L subpatterns
;;; Since there are too many occurrences, the occurrence structure takes too much space,
;;; now try to use a single cons cell, to record the minimum information
(define (make-trimed-occ seq-no pos) (cons seq-no pos))
(define (trimed-occ-seq-no oc) (car oc))
(define (trimed-occ-pos oc) (cdr oc))
(define (occurrence-vlist->trimed-occ oc)
  (make-trimed-occ (occurrence-vlist-seq-no oc) (occurrence-vlist-start oc)))
(define (find-motif-by-pattern pat fasta errL bg)
  ;; pat and fasta are the codified sequence(s) of pattern and fasta respectively.
  ;; Use the occurrences to estimate an hmm and give score to it, return the result as a motif.
  (let* ((th (inflated-threshold-by-L errL (coded-seq-length pat)))
         (occ (map occurrence->occurrence-vlist (find-in-fasta pat fasta *score-function* th)))
         (g (estimate-hmm-from-occurrence-vlist occ))
         (score (score-for-motif pat g occ bg)))
    ;(print "Motif: pat: " pat "\tscore: " score "\tocc-len: " (length occ) "\n")
    (make-motif g score pat occ)))
(define (find-motif-in-fasta-L fasta L keeper errL bg)
  ;; fasta is a codified fasta file, and L is the length of the motif
  ;; wanted. Goes through the length L subsequences in fasta, and for
  ;; each one, find a motif for the pattern, and call keeper to add
  ;; it to the result list. errL is the error level allowed in finding
  ;; similar subsequences to estimate hmm.
  ;; Returns the patterns hash table which contains the patterns and its
  ;; associated occurrences.
  ;; memoize for the sub-pattern, so that we don't call find-motif-by-pattern
  ;; for the same pattern
  (let ((patterns (make-table)))
    (for s fasta
      (for-n i 0 (- (coded-seq-length (cdr s)) L)
        (let ((pat (subcoded-seq (cdr s) i (+ i L))))
          (when (not (table-ref patterns pat #f))
            ;; not already in patterns, add entry
            (let ((m (find-motif-by-pattern pat fasta errL bg)))
              (keeper 'in m)
              (table-set! patterns pat
                          (map occurrence-vlist->trimed-occ (motif-occs m))))))))
    patterns))
(define (find-motif-in-occ-list in-fasta pat errL bg lst)
  ;; pat is a codified pattern, lst is an occurrence list in which we try to
  ;; find pat. Returns a motif.
  (define (subpattern seq oc L th)
    ;; oc is an trimed-occ, returns the subpattern with a bit longer length
    (let* ((i (trimed-occ-pos oc))
           (e (+ i L)))
      (subcoded-seq seq i (min (coded-seq-length seq) (- e th))))) ; th is negative
  (let* ((L (coded-seq-length pat))
         (inflated-th (inflated-threshold-by-L errL L))
         (th (threshold-by-L errL L))
         (occ '()))
    (for s lst
      (let* ((s-seq (cdr (list-ref in-fasta (trimed-occ-seq-no s))))
             (ls (find-seq pat (subpattern s-seq s L th) *score-function* inflated-th))
             (oc (if (null? ls) #f (car ls))))
        ;; now rectify the information in ls, if any.
        ;; we take at most only one occurrence from each s
        (when oc
          (set! occ (cons (make-occurrence-vlist (trimed-occ-seq-no s)
                                                 s-seq
                                                 (occurrence-score oc)
                                                 (+ (trimed-occ-pos s) (occurrence-start oc))
                                                 (+ (trimed-occ-pos s) (occurrence-end oc))
                                                 (alignment->vertex-name (occurrence-alignment oc)))
                          occ)))))
    (let* ((g (estimate-hmm-from-occurrence-vlist occ))
           (score (score-for-motif pat g occ bg)))
      ;(print "Motif: pat: " pat "\tscore: " score "\tocc-len: " (length occ) "\n")
      (make-motif g score pat occ))))
(define (find-motif-in-fasta-pruned fasta L keeper errL bg sub-pats)
  ;; fasta is a codifed fasta file, L is the length of motif we are looking for.
  ;; sub-pats is a hash table associating subpatterns of length L-1 and their
  ;; occurrence list. For each pattern of length L in fasta, we only check the
  ;; positions of the occurrences of its L-1 left-most subpattern
  ;; Returns a new hash table associating length L patterns and its occurrence
  ;; list. Also memoize.
  (let ((patterns (make-table)))
    (for s fasta
      (for-n i 0 (- (coded-seq-length (cdr s)) L)
        (let ((pat (subcoded-seq (cdr s) i (+ i L))))
          (when (not (table-ref patterns pat #f))
            ;; not already in patterns, add entry
            (let ((sub-lst (table-ref sub-pats (subcoded-seq pat 0 (- L 1)) #f)))
              (when sub-lst
                (let ((m (find-motif-in-occ-list fasta pat errL bg sub-lst)))
                  (keeper 'in m)
                  (table-set! patterns pat
                              (map occurrence-vlist->trimed-occ (motif-occs m))))))))))
    patterns))
(define (find-motif-in-fasta fasta min-L max-L keeper errL reverse-complement?)
  ;; try the lengths, from short to long, and keep the good motifs by using keeper.
  ;; If reverse-complement? is true, then consider also the reverse-complement of
  ;; the input sequences, but the reported positions are always relative to the
  ;; original sequences.
  ;; Allocating two large enough buffers once, one for bufs, one for bufp in find-seq
  (let* ((in-fasta (if reverse-complement?
                       (append-reverse-complement-fasta fasta DNA-code-complement)
                       fasta))
         (bg (m1-model->inexact (estimate-m1-model in-fasta 4)))
         (pats (find-motif-in-fasta-L in-fasta min-L keeper errL bg))
         (seq-Lens (map (@ coded-seq-length cdr) in-fasta))
         (max-Lens (reduce max 0 seq-Lens)))
    ; setup things properly
    (set! *g-prior* (/ 1.0 (mean seq-Lens)))
    (find-seq-bufs-set! (make-2d-s8array (+ max-L 1) (+ max-Lens 1) 0))
    (find-seq-bufp-set! (make-2d-s8array (+ max-L 1) (+ max-Lens 1) 0))
    ;
    (for-n i (+ min-L 1) max-L
      (set! pats (find-motif-in-fasta-pruned in-fasta i keeper errL bg pats)))
    (print-current-best-alignment keeper in-fasta errL reverse-complement? fasta)
    (keeper 'for-each print-motif)
    ; release the buffers
    (find-seq-bufs-set! #f)
    (find-seq-bufp-set! #f)))
;;;; End of code
2009/9/10 peter lo peter19852001@yahoo.com.hk:
I will explain just a bit what I am doing. Basically given a number of input DNA sequences, I would like to find "similar" (in terms of edit distance) subsequences and rank them using a scoring scheme.
Oh wow. I did something very much like this many years ago when I first started using Scheme. I was working on a string-based nearest-neighbor search using a variety of tree structures, where my similarity metric was based on Levenshtein edit distance (among others). The big problem is simply that the algorithms just don't scale very nicely, in either time or space - it's a great field for original research. I ended up needing to break the problem into several parts, writing out intermediate files with the resulting data structures.
Do you have Dan Gusfield's _Algorithms on Strings, Trees and Sequences_? As far as I'm concerned it's the definitive work in the field so far. Dan actually wrote most of it concerning the field of DNA sequencing as well, so many of his examples may also be relevant. Some of the key algorithms I needed to use tend to be a bit fiddly about their implementation: e.g. when implementing a suffix tree, you can blow the algorithm up from being linear to n^2 just by choosing a bad data representation.
Sorry I am not being of more direct help, but you're right - if the code you have is anything like mine was, the amount you can post online won't tell us very much. You are simply doing something that is hard. Good luck with it :)
david
On 9-Sep-09, at 10:42 PM, peter lo wrote:
By the way, I am currently trying to reduce the allocation of short-lived objects. I am sure there are general tips on the web about this, but I am not sure whether there are tips more specific to Gambit Scheme, taking into account its garbage collector.
I suggest that you represent your "lists" using extensible vectors, implemented with plain vectors. The value at index 0 is really the number of active elements in the extensible vector. When the vector is full, a slightly longer one (say 20% longer) is allocated and the previous content is copied to it. This representation is up to 6 times more compact than the list representation for long enough "lists", when using Gambit. That's because each element in the list requires a pair, and each pair occupies 6 words of memory (header+car+cdr = 3 words, and pairs are movable objects so the same space must be reserved in the "to-space"). For long enough vectors (255 elements or more) each element occupies one word because the vector is "still" (no need to reserve space in the "to-space").
Marc
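A minimal sketch of this suggestion, with made-up names (slot 0 holds the element count, the vector grows by roughly 20% when full, and callers keep the vector returned by ext-vector-push!, which may be a longer copy):

(define (make-ext-vector) (make-vector 16 0))          ; slot 0 = count, initially 0
(define (ext-vector-count v) (vector-ref v 0))
(define (ext-vector-ref v i) (vector-ref v (+ i 1)))   ; i is 0-based

(define (ext-vector-push! v x)
  (let ((n (vector-ref v 0))
        (cap (vector-length v)))
    (if (< (+ n 1) cap)
        (begin (vector-set! v (+ n 1) x)
               (vector-set! v 0 (+ n 1))
               v)
        ;; full: allocate a vector about 20% longer, copy count + elements, retry
        (let ((w (make-vector (+ cap (max 1 (quotient cap 5))) 0)))
          (do ((i 0 (+ i 1))) ((> i n)) (vector-set! w i (vector-ref v i)))
          (ext-vector-push! w x)))))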
On Sep 10, 2009, at 8:27 AM, Marc Feeley wrote:
On 9-Sep-09, at 10:42 PM, peter lo wrote:
By the way, I am currently trying to reduce the allocation of short-lived objects. I am sure there are general tips on the web about this, but I am not sure whether there are tips more specific to Gambit Scheme, taking into account its garbage collector.
I suggest that you represent your "lists" using extensible vectors, implemented with plain vectors.
Or home-grown extensible u8vectors? Or strings (after configuring with --enable-char-size=1)? How many distinct "characters" are being distinguished?
Brad
2009/9/11 Bradley Lucier lucier@math.purdue.edu:
Or home-grown extensible u8vectors? Or strings (after configuring with --enable-char-size=1)? How many distinct "characters" are being distinguished?
Warning: *Beware of the following question, for I should already be sleeping* Supposing I have an alphabet of 2 letters (to make things simple), I can code it with two bits. Despite the possible algorithmic and computational pain, since we have bignums, how about encoding a string of this alphabet into a number, using bitwise operations? Appending a char to a string is a SHIFT of two bits, then an OR. Referencing should be a matter of LOG (to get the size), then shifting and an AND with 3.
I'm still awake enough to encode just 2 letters and not three on 2 bits (for concatenating "01" to "00" would not work as expected). I guess that it relies on the representation of bignums in memory.
What would such a misuse of bignums be worth?
P!
On Fri, 2009-09-11 at 00:32 +0900, Adrien Piérard wrote:
2009/9/11 Bradley Lucier lucier@math.purdue.edu:
Or home-grown extensible u8vectors? Or strings (after configuring with --enable-char-size=1)? How many distinct "characters" are being distinguished?
Warning: *Beware of the following question, for I should already be sleeping* Supposing I have an alphabet of 2 letters (to make things simple), I can code it with two bits. Despite the possible algorithmic and computational pain, since we have bignums, how about encoding a string of this alphabet into a number, using bitwise operations? Appending a char to a string is a SHIFT of two bits, then an OR. Referencing should be a matter of LOG (to get the size), then shifting and an AND with 3.
I'm still awake enough to encode just 2 letters and not three on 2 bits (for concatenating "01" to "00" would not work as expected). I guess that it relies on the representation of bignums in memory.
What would such a misuse of bignums be worth?
Bignums are not mutable at the "user" level, but they are mutable in the internal implementation of course.
At one point I wrapped integers in operations that implemented mutable sets of nonnegative integers (sets either with a finite number of elements (nonnegative exact integers) or all but a finite number of elements (negative exact integers)) as fixnum/bignum bitmaps. The ultimate extensible bit-vectors, to interpret as you wish.
The trouble is, such an implementation of sets of nonnegative integers is efficient only if the chance of a bit being set in your application is about 1/2. If you have very sparse sets, or sets whose elements tend to cluster, or any other kind of non-uniformity in your sets, other data structures are much more efficient. And if you keep adding elements on the end, it's quite inefficient (you don't increase the size by 20% at a time, you add 64 bits at a time; you'd probably want to start a set with a sentinel element if you know how big it will be eventually).
If I remember correctly, Marc didn't think they were worth inserting into the runtime. Perhaps I should resurrect them as an SRFI (but that's *so* much work) or just dump the code into the dumping grounds (if I can find it again).
Brad
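For illustration, a small sketch of the finite-set half of this idea, using Gambit's bitwise operations on exact integers (the set-* names are made up; Brad's version also covers complement sets, which this does not):

(define (set-adjoin s n)  (bitwise-ior s (arithmetic-shift 1 n)))
(define (set-remove s n)  (bitwise-and s (bitwise-not (arithmetic-shift 1 n))))
(define (set-member? s n) (odd? (arithmetic-shift s (- n))))
(define (set-union a b)   (bitwise-ior a b))
(define (set-intersection a b) (bitwise-and a b))

;; the set {1 3 200} is just the integer with bits 1, 3 and 200 set (a bignum)
(define s (set-adjoin (set-adjoin (set-adjoin 0 1) 3) 200))
(set-member? s 200)  ;; => #t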
On 10-Sep-09, at 11:32 AM, Adrien Piérard wrote:
2009/9/11 Bradley Lucier lucier@math.purdue.edu:
Or home-grown extensible u8vectors? Or strings (after configuring with --enable-char-size=1)? How many distinct "characters" are being distinguished?
Warning: *Beware of the following question, for I should already be sleeping* Supposing I have an alphabet of 2 letters (to make things simple), I can code it with two bits. Despite the possible algorithmic and computational pain, since we have bignums, how about encoding a string of this alphabet into a number, using bitwise operations? Appending a char to a string is a SHIFT of two bits, then an OR. Referencing should be a matter of LOG (to get the size), then shifting and an AND with 3.
I'm still awake enough to encode just 2 letters and not three on 2 bits (for concatenating "01" to "00" would not work as expected). I guess that it relies on the representation of bignums in memory.
What would such a misuse of bignums be worth?
You can use 1 bit per letter if there are 2 letters. You simply need a 1 bit at the top end to indicate the length of the bit string. Then you simply use integer-length (minus one) to get the length of the bit string:
(map integer-length '(1 2 3 4 5 6 7 8 9 10 11 12))
(1 2 2 3 3 3 3 4 4 4 4 4)
Note that your representation has the same asymptotic space efficiency as a u8vector where each byte contains 8 letters. I'm pretty sure an explicit u8vector representation would be faster (don't be fooled by "appending a letter is just a shift"... the shift is going to be O(n) not O(1) because these are bignums).
Marc
P.S. Get some sleep!
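A sketch of this 1-bit-per-letter encoding with a sentinel bit at the top (letters are 0 or 1; the procedure names are made up):

(define empty-bit-string 1)                  ; just the sentinel bit

(define (bit-string-append-letter s letter)  ; letter is 0 or 1
  (+ (arithmetic-shift s 1) letter))

(define (bit-string-length s)
  (- (integer-length s) 1))                  ; drop the sentinel bit

(define (bit-string-ref s i)                 ; i = 0 is the most recently appended letter
  (bitwise-and (arithmetic-shift s (- i)) 1))

;; "0110" appended letter by letter:
(define s (bit-string-append-letter
           (bit-string-append-letter
            (bit-string-append-letter
             (bit-string-append-letter empty-bit-string 0) 1) 1) 0))
(bit-string-length s)  ;; => 4
;; note: as Marc points out, each append is O(n) once s becomes a bignum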
Thanks for the suggestion, I will try it.
Regards, Peter
On 2009-9-9, at 22:42 , peter lo wrote:
As the code is very long and spans a number of files, maybe I will only post a fragment of it. I will explain just a bit what I am doing. Basically given a number of input DNA sequences, I would like to find "similar" (in terms of edit distance) subsequences and rank them using a scoring scheme. The lengths of the subsequences range from 5 to 20. Previously, for each subsequence length, I would go through all the subsequences, find their similar occurrences, and rank them. But this is too slow, so I use a heuristic, which is to record the similar occurrences of each subsequence of length L, then use them to refine the search for subsequences of length L+1. This is the part that is causing trouble. As the total length of the input sequences increases and L increases, the number of unique subsequences increases very rapidly, and since each of them keeps a list of occurrences, the amount of live data when going from length L to length L+1 can get very large.
If your string lengths do not exceed 20 and your alphabet has 4 characters, you could pack every string into a single integer in the range 0...2^40 or so. These values fit easily in a 64-bit integer, and distance can be calculated by XORing 2 numbers (well, only if the weights are the same).
For instance assuming letters {0 1 2 3}, the sequence "01320" maps to {10 13 20} = 0x478 if you use a stop bit as Marc remarked. Or it could be {00 13 20 05} = 0x785 if you use a length byte a la Pascal strings.
Moreover, you could pre-calculate an NxN edit distance table. A 1 meg table would handle 4x4; 5x5 needs 16 megs.
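Along these lines, a sketch of packing a fixed-length DNA string into one integer with 2 bits per base, and counting mismatching positions of two equal-length packed strings via XOR (a Hamming-style count, not full edit distance; all names are made up):

(define (base->code c)
  (case c ((#\A) 0) ((#\C) 1) ((#\G) 2) ((#\T) 3) (else (error "bad base" c))))

(define (pack-seq str)                   ; e.g. (pack-seq "ACGT")
  (let loop ((i 0) (acc 0))
    (if (= i (string-length str))
        acc
        (loop (+ i 1)
              (+ (arithmetic-shift acc 2) (base->code (string-ref str i)))))))

(define (mismatch-count a b len)         ; a, b packed strings of the same length len
  (let loop ((x (bitwise-xor a b)) (k 0) (n 0))
    (if (= k len)
        n
        (loop (arithmetic-shift x -2)
              (+ k 1)
              (+ n (if (= 0 (bitwise-and x 3)) 0 1))))))

;; (mismatch-count (pack-seq "ACGT") (pack-seq "ACCT") 4)  => 1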