Peter, It would be interesting to see your program, or at least the part that cranks up the memory usage so much. Maybe some of the people on the list would be able to give you a hint on how to better represent/manipulate your data.
This thread has been going on for some time now, and I feel that all the general, or theoretical if you will, answers have been given. It is probably time to show some relevant code if you continue to have trouble.
Pavel
On Wed, Sep 9, 2009 at 6:32 AM, Marc Feeley <feeley@iro.umontreal.ca> wrote:
On 9-Sep-09, at 8:51 AM, peter lo wrote:
Dear all, Thanks for the replies and clarifications. After some investigation, I believe that the cause of the heap overflow is not a memory leak. The whole program is in Scheme, so it cannot be a leak in external C libraries. And I have checked the program; there seems to be no "really useless" data hanging around that cannot be garbage collected. The real reason is simply that the input data is too large, so there are too many intermediate results, requiring > 1.6 GB of live memory.
It was my fault, as I did not expect the program to need that much memory, so the representation of the intermediate results was not particularly compact. Previously I used a structure with 6 members, but in fact I only need 2 of them to do the computations. After changing the representation to a simple cons cell instead, the program seems to manage to keep running, using a peak of only about 1700 MB of RAM.
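For illustration only (this is not Peter's actual code, and the field names are invented), the change might look something like this in Gambit:

    ;; Before: a structure with 6 fields, even though only two
    ;; of them are ever used in the computation.
    (define-structure result key value aux1 aux2 aux3 aux4)

    ;; After: a bare cons cell holding just the two needed fields.
    (define (make-result2 key value) (cons key value))
    (define (result2-key r) (car r))
    (define (result2-value r) (cdr r))

A cons cell occupies just two slots plus a header, so with millions of intermediate results the savings add up quickly.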
Another thing that I have noticed is that the system holds at most ~1700 MB of RAM for the heap, even when 100% of it is live objects; is this by design or just an accident on Windows XP? As the percentage of live objects gets closer to 100%, the GCs become more frequent, since each one reclaims little memory, and each takes around 2 seconds, because many objects must be examined to determine reachability, so the system is doing less useful work. Now I am trying to reduce the allocation of short-lived data, so that there will be fewer GCs. I am also considering switching to a Linux system.
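One hedged sketch of what "reducing short-lived allocation" can mean in practice (hypothetical code, not from the thread): fold over the data directly instead of building and discarding an intermediate list.

    ;; Builds a throwaway intermediate list on every call:
    (define (sum-squares/alloc lst)
      (apply + (map (lambda (x) (* x x)) lst)))

    ;; Same result with no intermediate allocation:
    (define (sum-squares lst)
      (let loop ((l lst) (acc 0))
        (if (null? l)
            acc
            (loop (cdr l) (+ acc (* (car l) (car l)))))))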
By default Gambit's memory management system resizes the heap so that after a GC there is 50% of the heap that is occupied by live objects (this can be changed with the -:lXXX runtime option). If the resizing requires that the heap grow, then the runtime will allocate new "movable sections" (which are 512 Kbytes each) by calling the C "malloc" function. If malloc returns NULL, indicating that no more memory is available to the process, then the system will keep on running, but with more than 50% of the heap occupied by live objects. If you are at 100% live occupation the GCs will be very frequent and very little useful computation will occur compared to the garbage collections. Consider yourself lucky that the program managed to finish executing!
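For example (the value 25 here is purely illustrative), a program could be started with a lower live-object target, trading a larger heap for fewer, less frequent collections:

    gsi -:l25 my-program.scm

A lower percentage means the heap is resized so that live objects occupy a smaller fraction of it after each GC, so more allocation can happen between collections.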
Solutions?

- Use a more compact data representation (you have started doing that).
- Use "still" objects (i.e. ##still-copy), which are more compactly represented when objects have 5 fields or more.
- Enable virtual memory (but then your system will slow down due to swapping).
- Buy more RAM.
- Switch from a 64 bit system to a 32 bit system (if you are using a 64 bit system).
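A minimal sketch of the ##still-copy suggestion (the structure and its fields are made up for the example):

    ;; A 6-field structure; per the note above, objects with 5 or
    ;; more fields have a more compact "still" representation.
    (define-structure record a b c d e f)

    ;; ##still-copy returns a copy of its argument allocated as a
    ;; still object (which the GC also never moves).
    (define r (##still-copy (make-record 1 2 3 4 5 6)))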
Marc