Marc:
I was looking at the mbrot benchmark with unsafe fixnum and flonum operations; the main loop is
Loop 1:
(define (count r i step x y)
  (let ((max-count 64)
        (radius^2  16.0))
    (let ((cr (fl+ r (fl* (exact->inexact x) step)))
          (ci (fl+ i (fl* (exact->inexact y) step))))
      (let loop ((zr cr) (zi ci) (c 0))
        (if (fx= c max-count)
            c
            (let ((zr^2 (fl* zr zr))
                  (zi^2 (fl* zi zi)))
              (if (fl> (fl+ zr^2 zi^2) radius^2)
                  c
                  (let ((new-zr (fl+ (fl- zr^2 zi^2) cr))
                        (new-zi (fl+ (fl* 2.0 (fl* zr zi)) ci)))
                    (loop new-zr new-zi (fx+ c 1))))))))))
Recomputing floating-point results is often faster than boxing them, so I applied this to zr^2 and zi^2, to get
Loop 2:
(define (count r i step x y)
  (let ((max-count 64)
        (radius^2  16.0))
    (let ((cr (fl+ r (fl* (exact->inexact x) step)))
          (ci (fl+ i (fl* (exact->inexact y) step))))
      (let loop ((zr cr) (zi ci) (c 0))
        (if (fx= c max-count)
            c
            (if (fl> (fl+ (fl* zr zr) (fl* zi zi)) radius^2)
                c
                (let ((new-zr (fl+ (fl- (fl* zr zr) (fl* zi zi)) cr))
                      (new-zi (fl+ (fl* 2.0 (fl* zr zi)) ci)))
                  (loop new-zr new-zi (fx+ c 1)))))))))
Results:
Default heap:
Loop 1:
./mbrot
code size = -289
(time (run-bench name count ok? run))
    1792 ms real time
    1791 ms cpu time (1770 user, 21 system)
    12549 collections accounting for 1499 ms real time (1487 user, 10 system)
    2782878336 bytes allocated
    44 minor faults
    no major faults
Loop 2:
./mbrot
code size = -289
(time (run-bench name count ok? run))
    794 ms real time
    793 ms cpu time (785 user, 8 system)
    5166 collections accounting for 619 ms real time (614 user, 4 system)
    1147411712 bytes allocated
    44 minor faults
    no major faults
But increasing the heap from whatever the default is to 1MB (roughly) made an even bigger difference on my machine (2.4GHz Intel Core 2 Duo, Mac OS X 10.6.8):
Loop 1:
./mbrot -:m1000
code size = -289
(time (run-bench name count ok? run))
    536 ms real time
    535 ms cpu time (530 user, 6 system)
    2440 collections accounting for 296 ms real time (293 user, 2 system)
    2782736064 bytes allocated
    343 minor faults
    no major faults
Loop 2:
./mbrot -:m1000
code size = -289
(time (run-bench name count ok? run))
    289 ms real time
    289 ms cpu time (286 user, 3 system)
    1000 collections accounting for 121 ms real time (120 user, 1 system)
    1147581952 bytes allocated
    343 minor faults
    no major faults
Surely 1MB can be OK as a default heap size nowadays ;-)!
This is with
gsc -v
v4.6.6 20120915144211 i386-apple-darwin10.8.0 "./configure 'CC=/pkgs/gcc-4.7.2/bin/gcc -march=core2 -fschedule-insns -frename-registers' '--enable-single-host' '--enable-multiple-versions'"
Brad
On 2012-12-18, at 8:45 AM, Bradley Lucier lucier@math.purdue.edu wrote:
Surely 1MB can be OK as a default heap size nowadays ;-)!
Why not 2, 5 or 10MB?
A default heap size of 5 MB makes some sense given that it is roughly the size of the executable code of the library.
BTW, if you want a different default heap size, why don't you use :
export GAMBCOPT=m1000
in your .profile?
Marc
On Jan 3, 2013, at 2:38 PM, Marc Feeley wrote:
On 2012-12-18, at 8:45 AM, Bradley Lucier lucier@math.purdue.edu wrote:
Surely 1MB can be OK as a default heap size nowadays ;-)!
Why not 2, 5 or 10MB?
What's your point?
A default heap size of 5 MB makes some sense given that it is roughly the size of the executable code of the library.
I think the default heap size should be the size of the largest cache in commonly-used processors (or half that size, given that the garbage collector is stop-and-copy, I think). That way most of the data working set would stay in cache.
My guess is that only a relatively small part of the Gambit runtime code is active at any given time in most programs (especially with the conditional specialization of most standard operators).
BTW, if you want a different default heap size, why don't you use :
export GAMBCOPT=m1000
in your .profile?
I know how to set the default heap size; what I'm concerned about is the people we'd like to attract to Gambit, who will try it out first with some trivial code that performs poorly because Gambit spends 2/3 of its time in GC. Newbies don't know about boxing flonums, computations that give ratnum results, etc.
Increasing the default heap size from basically zero would help with this problem.
On my Mac mini, with a 3 MB L2 cache and a 2.4 GHz Core 2 Duo, I get the following times for mbrot with the following minimum heap sizes (in K).
Somewhere between 500K and 1500K seems to be the sweet spot for the minimum heap size.
BTW, the number of GCs does not seem to vary smoothly; it jumps at discrete points as the minimum heap size grows. That goes against my intuition.
Brad
0: 1831 ms cpu time (1805 user, 27 system) 12549 collections accounting for 1690 ms real time (1543 user, 15 system)
100: 1822 ms cpu time (1798 user, 24 system) 12549 collections accounting for 1649 ms real time (1534 user, 13 system)
200: 1833 ms cpu time (1805 user, 28 system) 12549 collections accounting for 1677 ms real time (1540 user, 16 system)
300: 1824 ms cpu time (1798 user, 25 system) 12549 collections accounting for 1613 ms real time (1534 user, 14 system)
400: 1831 ms cpu time (1803 user, 28 system) 12547 collections accounting for 1698 ms real time (1539 user, 16 system)
500: 602 ms cpu time (594 user, 8 system) 2440 collections accounting for 338 ms real time (318 user, 4 system)
600: 609 ms cpu time (601 user, 8 system) 2440 collections accounting for 335 ms real time (320 user, 4 system)
700: 611 ms cpu time (601 user, 10 system) 2440 collections accounting for 361 ms real time (321 user, 5 system)
800: 603 ms cpu time (595 user, 9 system) 2440 collections accounting for 340 ms real time (318 user, 4 system)
900: 607 ms cpu time (600 user, 7 system) 2440 collections accounting for 336 ms real time (320 user, 3 system)
1000: 604 ms cpu time (595 user, 8 system) 2440 collections accounting for 358 ms real time (318 user, 4 system)
1100: 599 ms cpu time (591 user, 8 system) 2440 collections accounting for 323 ms real time (316 user, 4 system)
1200: 610 ms cpu time (601 user, 9 system) 2440 collections accounting for 342 ms real time (321 user, 4 system)
1300: 628 ms cpu time (618 user, 10 system) 1350 collections accounting for 258 ms real time (232 user, 3 system)
1400: 637 ms cpu time (629 user, 8 system) 1350 collections accounting for 266 ms real time (242 user, 3 system)
1500: 635 ms cpu time (623 user, 11 system) 1350 collections accounting for 289 ms real time (237 user, 4 system)
1600: 625 ms cpu time (617 user, 8 system) 1350 collections accounting for 250 ms real time (232 user, 3 system)
1700: 635 ms cpu time (623 user, 11 system) 1350 collections accounting for 282 ms real time (237 user, 4 system)
1800: 631 ms cpu time (625 user, 7 system) 1350 collections accounting for 245 ms real time (236 user, 2 system)
1900: 632 ms cpu time (625 user, 7 system) 1350 collections accounting for 251 ms real time (236 user, 2 system)
2000: 639 ms cpu time (629 user, 10 system) 1350 collections accounting for 260 ms real time (238 user, 4 system)
100: 1826 ms cpu time (1800 user, 25 system) 12549 collections accounting for 1677 ms real time (1537 user, 14 system)
2200: 805 ms cpu time (794 user, 11 system) 935 collections accounting for 273 ms real time (248 user, 3 system)
2300: 819 ms cpu time (811 user, 8 system) 935 collections accounting for 261 ms real time (252 user, 2 system)
2400: 817 ms cpu time (809 user, 8 system) 935 collections accounting for 259 ms real time (254 user, 2 system)
2500: 814 ms cpu time (806 user, 9 system) 935 collections accounting for 275 ms real time (251 user, 3 system)
2600: 821 ms cpu time (807 user, 15 system) 935 collections accounting for 287 ms real time (254 user, 5 system)
2700: 818 ms cpu time (805 user, 13 system) 935 collections accounting for 284 ms real time (251 user, 4 system)
2800: 809 ms cpu time (797 user, 12 system) 935 collections accounting for 304 ms real time (253 user, 3 system)
2900: 817 ms cpu time (805 user, 11 system) 935 collections accounting for 262 ms real time (252 user, 4 system)
3000: 813 ms cpu time (798 user, 14 system) 935 collections accounting for 272 ms real time (250 user, 4 system)
3100: 862 ms cpu time (847 user, 14 system) 714 collections accounting for 298 ms real time (252 user, 4 system)
3200: 872 ms cpu time (859 user, 13 system) 714 collections accounting for 284 ms real time (257 user, 4 system)
3300: 862 ms cpu time (848 user, 14 system) 714 collections accounting for 288 ms real time (253 user, 4 system)
3400: 865 ms cpu time (853 user, 12 system) 714 collections accounting for 277 ms real time (255 user, 3 system)
3500: 855 ms cpu time (842 user, 13 system) 714 collections accounting for 285 ms real time (252 user, 4 system)
3600: 864 ms cpu time (852 user, 12 system) 714 collections accounting for 291 ms real time (255 user, 4 system)
3700: 879 ms cpu time (866 user, 12 system) 714 collections accounting for 279 ms real time (260 user, 4 system)
3800: 870 ms cpu time (859 user, 11 system) 714 collections accounting for 280 ms real time (258 user, 3 system)
3900: 858 ms cpu time (847 user, 11 system) 714 collections accounting for 284 ms real time (253 user, 3 system)
4000: 851 ms cpu time (835 user, 16 system) 578 collections accounting for 285 ms real time (243 user, 4 system)
4100: 868 ms cpu time (854 user, 13 system) 578 collections accounting for 263 ms real time (247 user, 3 system)
4200: 860 ms cpu time (847 user, 13 system) 578 collections accounting for 268 ms real time (244 user, 4 system)
4300: 868 ms cpu time (853 user, 14 system) 578 collections accounting for 283 ms real time (249 user, 4 system)
4400: 870 ms cpu time (857 user, 12 system) 578 collections accounting for 261 ms real time (249 user, 4 system)
4500: 854 ms cpu time (839 user, 15 system) 578 collections accounting for 285 ms real time (242 user, 4 system)
4600: 860 ms cpu time (846 user, 13 system) 578 collections accounting for 272 ms real time (245 user, 4 system)
4700: 865 ms cpu time (854 user, 11 system) 578 collections accounting for 270 ms real time (248 user, 3 system)
4800: 859 ms cpu time (848 user, 11 system) 578 collections accounting for 266 ms real time (246 user, 3 system)
4900: 858 ms cpu time (844 user, 14 system) 485 collections accounting for 243 ms real time (232 user, 4 system)
5000: 849 ms cpu time (834 user, 14 system) 485 collections accounting for 311 ms real time (229 user, 4 system)
5100: 861 ms cpu time (847 user, 14 system) 485 collections accounting for 247 ms real time (233 user, 4 system)
5200: 838 ms cpu time (822 user, 16 system) 485 collections accounting for 265 ms real time (226 user, 4 system)
5300: 852 ms cpu time (838 user, 14 system) 485 collections accounting for 251 ms real time (230 user, 3 system)
5400: 848 ms cpu time (837 user, 11 system) 485 collections accounting for 259 ms real time (230 user, 3 system)
5500: 850 ms cpu time (832 user, 17 system) 485 collections accounting for 268 ms real time (231 user, 5 system)
5600: 843 ms cpu time (829 user, 13 system) 485 collections accounting for 247 ms real time (228 user, 4 system)
5700: 851 ms cpu time (839 user, 13 system) 485 collections accounting for 256 ms real time (232 user, 3 system)
5800: 841 ms cpu time (827 user, 14 system) 418 collections accounting for 231 ms real time (211 user, 3 system)
5900: 831 ms cpu time (816 user, 15 system) 418 collections accounting for 222 ms real time (209 user, 3 system)
6000: 850 ms cpu time (837 user, 13 system) 418 collections accounting for 221 ms real time (215 user, 3 system)
6100: 839 ms cpu time (826 user, 13 system) 418 collections accounting for 222 ms real time (211 user, 3 system)
6200: 824 ms cpu time (806 user, 18 system) 418 collections accounting for 238 ms real time (206 user, 5 system)
6300: 834 ms cpu time (819 user, 15 system) 418 collections accounting for 223 ms real time (210 user, 3 system)
6400: 833 ms cpu time (816 user, 17 system) 418 collections accounting for 238 ms real time (211 user, 4 system)
6500: 827 ms cpu time (808 user, 18 system) 418 collections accounting for 248 ms real time (206 user, 4 system)
6600: 834 ms cpu time (817 user, 17 system) 418 collections accounting for 231 ms real time (208 user, 4 system)
6700: 830 ms cpu time (816 user, 14 system) 367 collections accounting for 215 ms real time (192 user, 3 system)
6800: 816 ms cpu time (801 user, 15 system) 367 collections accounting for 227 ms real time (189 user, 3 system)
6900: 826 ms cpu time (810 user, 15 system) 367 collections accounting for 212 ms real time (192 user, 3 system)
7000: 826 ms cpu time (810 user, 16 system) 367 collections accounting for 235 ms real time (190 user, 3 system)
7100: 822 ms cpu time (805 user, 17 system) 367 collections accounting for 218 ms real time (190 user, 4 system)
7200: 816 ms cpu time (803 user, 13 system) 367 collections accounting for 227 ms real time (189 user, 3 system)
7300: 833 ms cpu time (820 user, 12 system) 367 collections accounting for 201 ms real time (191 user, 2 system)
7400: 820 ms cpu time (802 user, 18 system) 367 collections accounting for 218 ms real time (188 user, 4 system)
7500: 831 ms cpu time (812 user, 19 system) 367 collections accounting for 208 ms real time (192 user, 4 system)
7600: 809 ms cpu time (790 user, 19 system) 328 collections accounting for 199 ms real time (170 user, 4 system)
7700: 811 ms cpu time (796 user, 14 system) 328 collections accounting for 198 ms real time (172 user, 3 system)
7800: 816 ms cpu time (802 user, 15 system) 328 collections accounting for 193 ms real time (173 user, 3 system)
7900: 811 ms cpu time (795 user, 16 system) 328 collections accounting for 193 ms real time (170 user, 3 system)
8000: 818 ms cpu time (803 user, 15 system) 328 collections accounting for 180 ms real time (171 user, 3 system)
8100: 825 ms cpu time (812 user, 14 system) 328 collections accounting for 178 ms real time (173 user, 2 system)
8200: 813 ms cpu time (799 user, 14 system) 328 collections accounting for 180 ms real time (171 user, 3 system)
8300: 828 ms cpu time (813 user, 15 system) 328 collections accounting for 189 ms real time (174 user, 3 system)
8400: 821 ms cpu time (805 user, 16 system) 328 collections accounting for 183 ms real time (174 user, 3 system)
8500: 813 ms cpu time (797 user, 16 system) 296 collections accounting for 180 ms real time (157 user, 3 system)
8600: 823 ms cpu time (809 user, 15 system) 296 collections accounting for 169 ms real time (159 user, 2 system)
8700: 807 ms cpu time (790 user, 17 system) 296 collections accounting for 164 ms real time (155 user, 3 system)
8800: 814 ms cpu time (797 user, 17 system) 296 collections accounting for 163 ms real time (157 user, 3 system)
8900: 816 ms cpu time (801 user, 16 system) 296 collections accounting for 164 ms real time (157 user, 2 system)
9000: 802 ms cpu time (786 user, 16 system) 296 collections accounting for 170 ms real time (155 user, 2 system)
9100: 811 ms cpu time (794 user, 17 system) 296 collections accounting for 164 ms real time (158 user, 3 system)
9200: 810 ms cpu time (797 user, 14 system) 296 collections accounting for 174 ms real time (157 user, 2 system)
9300: 817 ms cpu time (800 user, 17 system) 296 collections accounting for 172 ms real time (158 user, 3 system)
9400: 798 ms cpu time (780 user, 18 system) 269 collections accounting for 164 ms real time (143 user, 3 system)
9500: 814 ms cpu time (796 user, 17 system) 269 collections accounting for 165 ms real time (146 user, 3 system)
9600: 801 ms cpu time (782 user, 19 system) 269 collections accounting for 165 ms real time (141 user, 3 system)
9700: 802 ms cpu time (783 user, 19 system) 269 collections accounting for 158 ms real time (143 user, 3 system)
9800: 791 ms cpu time (771 user, 20 system) 269 collections accounting for 152 ms real time (138 user, 3 system)
9900: 800 ms cpu time (779 user, 21 system) 269 collections accounting for 155 ms real time (141 user, 3 system)
10000: 794 ms cpu time (774 user, 19 system) 269 collections accounting for 156 ms real time (142 user, 3 system)
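A sweep like the one above can be driven from Gambit itself; here is a rough sketch (it assumes the compiled ./mbrot executable shown earlier and the -:m runtime option, and it simply lets each run print its own timings):

;; Sketch: run mbrot with minimum heap sizes from 0K to 10000K in 100K steps.
(let loop ((k 0))
  (if (<= k 10000)
      (begin
        (shell-command (string-append "./mbrot -:m" (number->string k)))
        (loop (+ k 100)))))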
On 2013-01-03, at 5:43 PM, Bradley Lucier lucier@math.purdue.edu wrote:
On Jan 3, 2013, at 2:38 PM, Marc Feeley wrote:
On 2012-12-18, at 8:45 AM, Bradley Lucier lucier@math.purdue.edu wrote:
Surely 1MB can be OK as a default heap size nowadays ;-)!
Why not 2, 5 or 10MB?
What's your point?
My point is simply that there doesn't seem to be a rule for choosing the best default heap size. Some programs (such as mbrot) seem to perform best on your machine with a 1 MB heap while other programs will do better with 10 MB. Moreover, it also depends on the details of the processor. For example on my Mac (2.6 GHz i7 with 6 MB L3 cache and 0.25 MB L2 cache) I get these numbers for your modified mbrot:
510 ms for heap size <= 0.4 MB
195 ms for 0.5 MB <= heap size <= 1.3 MB
161 ms for 1.4 MB <= heap size <= 3.0 MB
140 ms for 3.1 MB <= heap size
The lowest run time (136 ms) is at heap size = 20 MB.
What setting should we optimize for?
I agree however that the current setting is too low. But given that I will change the default, I would prefer to only change it once. What is a "reasonable" setting? The experiment on my computer suggests "the bigger the better".
A default heap size of 5 MB makes some sense given that it is roughly the size of the executable code of the library.
I think the default heap size should be the size of the largest cache in commonly-used processors (or half that size, given that the garbage collector is stop-and-copy, I think). That way most of the data working set would stay in cache.
My guess is that only a relatively small part of the Gambit runtime code is active at any given time in most programs (especially with the conditional specialization of most standard operators).
BTW, if you want a different default heap size, why don't you use :
export GAMBCOPT=m1000
in your .profile?
I know how to set the default heap size; what I'm concerned about is the people we'd like to attract to Gambit, who will try it out first with some trivial code that performs poorly because Gambit spends 2/3 of its time in GC. Newbies don't know about boxing flonums, computations that give ratnum results, etc.
Increasing the default heap size from basically zero would help with this problem.
On my Mac mini, with a 3 MB L2 cache and a 2.4 GHz Core 2 Duo, I get the following times for mbrot with the following minimum heap sizes (in K).
Somewhere between 500K and 1500K seems to be the sweet spot for the minimum heap size.
BTW, the number of GCs does not seem to vary smoothly; it jumps at discrete points as the minimum heap size grows. That goes against my intuition.
That's because the (stop-and-copy) heap is composed of a set of msections which are all the same size (0.5 MB if I recall correctly). If you ask for 0.6 MB, you will get 1 MB (2 msections).
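A small sketch of that rounding, assuming the 0.5 MB msection size mentioned above (the constant is taken from this message, not from the source):

(define msection-size (* 512 1024))   ;; 0.5 MB, as described above

(define (heap-size-actually-obtained requested-bytes)
  ;; the heap grows in whole msections, so round the request up
  (* msection-size (ceiling (/ requested-bytes msection-size))))

(heap-size-actually-obtained 600000)  ;; ask for ~0.6 MB => 1048576 bytes, i.e. 1 MB (2 msections)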
Marc
On Jan 3, 2013, at 8:16 PM, Marc Feeley wrote:
What's your point?
My point is simply that there doesn't seem to be a rule for choosing the best default heap size.
We're not choosing the best default heap size, we're choosing the best default minimum heap size.
Some programs (such as mbrot) seem to perform best on your machine with a 1 MB heap while other programs will do better with 10 MB. Moreover, it also depends on the details of the processor. For example on my Mac (2.6 GHz i7 with 6 MB L3 cache and 0.25 MB L2 cache) I get these numbers for your modified mbrot:
I was using the original mbrot, which eats up memory at almost twice the rate of the modified one.
510 ms for heap size <= 0.4 MB
195 ms for 0.5 MB <= heap size <= 1.3 MB
161 ms for 1.4 MB <= heap size <= 3.0 MB
140 ms for 3.1 MB <= heap size
The lowest run time (136 ms) is at heap size = 20 MB.
What setting should we optimize for?
I agree however that the current setting is too low. But given that I will change the default, I would prefer to only change it once. What is a "reasonable" setting? The experiment on my computer suggests "the bigger the better".
You also have a faster memory system, which perhaps makes the cache sizes less important.
It's an interesting discussion to have. But making the minimum heap size 1MB would give most of the speed benefits you see on your machine, and may help on machines with crappier memory systems.
Anyone want to do some experiments on iOS or Android?
Brad
On 2013-01-03, at 8:32 PM, Bradley Lucier lucier@math.purdue.edu wrote:
On Jan 3, 2013, at 8:16 PM, Marc Feeley wrote:
What's your point?
My point is simply that there doesn't seem to be a rule for choosing the best default heap size.
We're not choosing the best default heap size, we're choosing the best default minimum heap size.
Some programs (such as mbrot) seem to perform best on your machine with a 1 MB heap while other programs will do better with 10 MB. Moreover, it also depends on the details of the processor. For example on my Mac (2.6 GHz i7 with 6 MB L3 cache and 0.25 MB L2 cache) I get these numbers for your modified mbrot:
I was using the original mbrot, which eats up memory at almost twice the rate of the modified one.
510 ms for heap size <= 0.4 MB
195 ms for 0.5 MB <= heap size <= 1.3 MB
161 ms for 1.4 MB <= heap size <= 3.0 MB
140 ms for 3.1 MB <= heap size
The lowest run time (136 ms) is at heap size = 20 MB.
What setting should we optimize for?
I agree however that the current setting is too low. But given that I will change the default, I would prefer to only change it once. What is a "reasonable" setting? The experiment on my computer suggests "the bigger the better".
You also have a faster memory system, which perhaps makes the cache sizes less important.
It's an interesting discussion to have. But making the minimum heap size 1MB would give most of the speed benefits you see on your machine, and may help on machines with crappier memory systems.
Anyone want to do some experiments on iOS or Android?
Brad
Another interesting question: is there an API on the main OSes to get the cache sizes? We could make the default minimum heap size dependent on the cache size.
Marc
Another interesting question: is there an API on the main OSes to get the cache sizes? We could make the default minimum heap size dependent on the cache size.
For windows, it seems there's an API: http://stackoverflow.com/questions/150294/how-to-programmatically-get-the-cp...
For linux, it seems that libproccpuinfo would allow you to read /sys/devices/system/cpu/cpu0/cache/index2/size properly, and thus get the information as well.
I currently don't have a FreeBSD machine to check for this OS what to do, but I assume it's accessible with the `sysctl` tool, and that an API exists for it as well.
P!
On Jan 3, 2013, at 8:52 PM, Adrien Piérard wrote:
Another interesting question: is there an API on the main OSes to get the cache sizes? We could make the default minimum heap size dependent on the cache size.
For windows, it seems there's an API: http://stackoverflow.com/questions/150294/how-to-programmatically-get-the-cp...
For linux, it seems that libproccpuinfo would allow you to read /sys/devices/system/cpu/cpu0/cache/index2/size properly, and thus get the information as well.
I currently don't have a FreeBSD machine to check for this OS what to do, but I assume it's accessible with the `sysctl` tool, and that an API exists for it as well.
Cool! Here's what I get in Mac OS X 10.6.8:
[Media-Mac-mini-3:~/programs] lucier% sysctl -a | grep cache
hw.cachelinesize = 64
hw.l1icachesize = 32768
hw.l1dcachesize = 32768
hw.l2cachesize = 3145728
kern.flush_cache_on_write: 0
vfs.generic.nfs.client.access_cache_timeout: 60
vfs.generic.nfs.server.reqcache_size: 64
net.inet.ip.rtmaxcache: 128
net.inet6.ip6.rtmaxcache: 128
hw.cacheconfig: 2 1 2 0 0 0 0 0 0 0
hw.cachesize: 8321499136 32768 3145728 0 0 0 0 0 0 0
hw.cachelinesize: 64
hw.l1icachesize: 32768
hw.l1dcachesize: 32768
hw.l2cachesize: 3145728
machdep.cpu.cache.linesize: 64
machdep.cpu.cache.L2_associativity: 6
machdep.cpu.cache.size: 3072
Brad
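For the Linux route mentioned earlier, the sysfs file can also be read directly from Scheme. A rough sketch (the path, the index2 entry, and the trailing "K" unit are assumptions about the sysfs layout, not something verified in this thread):

(define (linux-cache-size-in-bytes)
  ;; Read e.g. "3072K" from sysfs and convert it to bytes.
  ;; Returns #f if the file is absent or cannot be parsed.
  (let ((path "/sys/devices/system/cpu/cpu0/cache/index2/size"))
    (and (file-exists? path)
         (let* ((line (call-with-input-file path read-line))
                (n (and (string? line)
                        (> (string-length line) 0)
                        (string->number
                         (substring line 0 (- (string-length line) 1))))))
           (and n (* n 1024))))))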
I believe that expert users, at least, like to check the memory consumption statistics of their programs, and too big a footprint for a newly started Gambit might give the impression that Gambit is memory-costly.
Therefore, if the minimum heap size will be a fixed number:
I'd guess no one will find Gambit memory-heavy as long as the binary + C stack + heap of a newly launched Gambit takes less than 10 MB.
To make that happen, I suppose a fixed default minimum heap setting would be no more than ~2.5 MB (binary ~3.8 MB + 2.5 MB x 2 = 5 MB heap = ~8.8 MB, completely fine).
Brgds
2013/1/4 Marc Feeley feeley@iro.umontreal.ca ..
I agree however that the current setting is too low. But given that I will change the default, I would prefer to only change it once. What is a "reasonable" setting? The experiment on my computer suggests "the bigger the better".
..
On 2013-01-04, at 9:00 AM, Mikael mikael.rcv@gmail.com wrote:
I believe that expert users, at least, like to check the memory consumption statistics of their programs, and too big a footprint for a newly started Gambit might give the impression that Gambit is memory-costly.
True.
Therefore, if the minimum heap size will be a fixed number:
I'd guess no one will find Gambit memory-heavy as long as the binary + C stack + heap of a newly launched Gambit takes less than 10 MB.
To make that happen, I suppose a fixed default minimum heap setting would be no more than ~2.5 MB (binary ~3.8 MB + 2.5 MB x 2 = 5 MB heap = ~8.8 MB, completely fine).
I'm not sure why you multiply the heap size by 2. The (min and max) heap sizes that are specified in the runtime options account for all of the heap (including both the fromspace and tospace of the copying collector, and the space for the still objects).
In any case, I have just pushed changes to the runtime which take the (highest level) cache size into account. The default minimum heap size will be 1/2 of the cache size, or 1 MB, whichever is larger.
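In Scheme terms the rule is roughly the following (a sketch of the policy as just stated, not the actual runtime code; cache-size stands for whatever the cache probe reports, with 0 meaning it could not be determined):

(define (default-minimum-heap-size cache-size)
  ;; half of the largest cache, but never less than 1 MB
  (max (quotient cache-size 2) (* 1024 1024)))

(default-minimum-heap-size (* 6 1024 1024))  ;; 6 MB cache => 3 MB minimum heap
(default-minimum-heap-size 0)                ;; unknown cache => 1 MB minimum heap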
The code to get the cache size is in lib/os.c and it has been tested on Mac OS X and Linux. It seems quite complex to get this information on Windows, so I haven't implemented it. If someone would like to contribute that code, please send me a patch.
Now, on my Mac which has a 6 MB cache, mbrot runs 3.5 times faster than before (with the default runtime options, which gives a 3 MB heap). The resident memory size of the process is now 6.75 MB instead of 3.5 MB.
Marc
Marc:
I built the latest git sources, and ran bench with it and with September sources, and there was no difference in the number of GCs or the runtime.
Are things working right with the new minimum heap size? Was it pushed correctly?
Brad
BTW, this happened with Mac OS X 10.6.8 and Ubuntu 12.10.
Brad
On Jan 5, 2013, at 7:42 PM, Bradley Lucier wrote:
Marc:
I built the latest git sources, and ran bench with it and with September sources, and there was no difference in the number of GCs or the runtime.
Are things working right with the new minimum heap size? Was it pushed correctly?
Brad
On 2013-01-05, at 7:42 PM, Bradley Lucier lucier@math.purdue.edu wrote:
Marc:
I built the latest git sources, and ran bench with it and with September sources, and there was no difference in the number of GCs or the runtime.
Are things working right with the new minimum heap size? Was it pushed correctly?
Yes, according to: https://github.com/feeley/gambit/commits/master
Are you sure you are pulling from the github repo?
Marc
I followed the instructions at
https://github.com/feeley/gambit
On Jan 5, 2013, at 10:28 PM, Marc Feeley wrote:
On 2013-01-05, at 7:42 PM, Bradley Lucier lucier@math.purdue.edu wrote:
Marc:
I built the latest git sources, and ran bench with it and with September sources, and there was no difference in the number of GCs or the runtime.
Are things working right with the new minimum heap size? Was it pushed correctly?
Yes, according to: https://github.com/feeley/gambit/commits/master
Are you sure you are pulling from the github repo?
Marc
On 2013-01-05, at 11:15 PM, Bradley Lucier lucier@math.purdue.edu wrote:
I followed the instructions at
The quick-install instructions work for me. Which instructions did not work for you (perhaps the detailed instructions in INSTALL.txt)?
Marc
Sorry, the bench script sets the minimum heap to 10,000K; I hadn't noticed that.
Brad
On Jan 5, 2013, at 10:28 PM, Marc Feeley wrote:
On 2013-01-05, at 7:42 PM, Bradley Lucier lucier@math.purdue.edu wrote:
Marc:
I built the latest git sources, and ran bench with it and with September sources, and there was no difference in the number of GCs or the runtime.
Are things working right with the new minimum heap size? Was it pushed correctly?
Yes, according to: https://github.com/feeley/gambit/commits/master
Are you sure you are pulling from the github repo?
Marc
You know, we should compare the performance with 10,000K as set in the bench script and the new default minimum. I might try that tomorrow.
Brad
On Jan 5, 2013, at 11:46 PM, Bradley Lucier wrote:
Sorry, the bench script sets the minimum heap to 10,000K; I hadn't noticed that.
Brad
On Jan 5, 2013, at 10:28 PM, Marc Feeley wrote:
On 2013-01-05, at 7:42 PM, Bradley Lucier lucier@math.purdue.edu wrote:
Marc:
I built the latest git sources, and ran bench with it and with September sources, and there was no difference in the number of GCs or the runtime.
Are things working right with the new minimum heap size? Was it pushed correctly?
Yes, according to: https://github.com/feeley/gambit/commits/master
Are you sure you are pulling from the github repo?
Marc
On 2013-01-06, at 12:16 AM, Bradley Lucier lucier@math.purdue.edu wrote:
You know, we should compare the performance with 10,000K as set in the bench script and the new default minimum.
... and the old default minimum (which you can get with -:m1).
I might try that tomorrow.
Sounds good.
Marc
Marc:
Here are the results on my Mac Mini (Intel Core 2 Duo, 2.4 GHz, 3 MB L2 cache, 8 GB RAM, 1.07 GHz bus speed, Mac OS X 10.6.8), with
[Media-Mac-mini-3:gambit-2/gambit/bench] lucier% gsi -v
v4.6.6 20130104183242 i386-apple-darwin10.8.0 "./configure 'CC=/pkgs/gcc-4.7.2/bin/gcc -march=core2 -fschedule-insns -frename-registers' '--enable-single-host' '--enable-multiple-versions'"
My observations:
I dismissed any differences that were < 20%.
Under that criterion, the new default did not lead to the best performance for dynamic, early, paraffins, and nboyer, for which the old benchmark setting of a 10,000K minimum heap won.
The new default heap size beat the old benchmark setting of 10,000K minimum heap for cpstak, diviter, divrec, mbrot, sumfp, and string.
For no programs did the old default heap size beat the other settings.
Perhaps you could characterize the programs that have each property. My guess is that the new default heap size beats -:m10000 for programs that allocate a lot of memory but have few objects survive each GC, while -:m10000 wins for programs that allocate a lot of memory and have a lot of objects live at the end of each GC.
Brad
On 2013-01-06, at 1:17 PM, Bradley Lucier lucier@math.purdue.edu wrote:
Marc:
Here are the results on my Mac Mini (Intel Core 2 Duo, 2.4 GHz, 3 MB L2 cache, 8 GB RAM, 1.07 GHz bus speed, Mac OS X 10.6.8), with
[Media-Mac-mini-3:gambit-2/gambit/bench] lucier% gsi -v
v4.6.6 20130104183242 i386-apple-darwin10.8.0 "./configure 'CC=/pkgs/gcc-4.7.2/bin/gcc -march=core2 -fschedule-insns -frename-registers' '--enable-single-host' '--enable-multiple-versions'"
My observations:
I dismissed any differences that were < 20%.
Under that criterion, the new default did not lead to the best performance for dynamic, early, paraffins, and nboyer, for which the old benchmark setting of a 10,000K minimum heap won.
The new default heap size beat the old benchmark setting of 10,000K minimum heap for cpstak, diviter, divrec, mbrot, sumfp, and string.
For no programs did the old default heap size beat the other settings.
Very nice.
Perhaps you could characterize the programs that have each property. My guess is that the new default heap size beats -:m10000 for programs that allocate a lot of memory but have few objects survive each GC, while -:m10000 wins for programs that allocate a lot of memory and have a lot of objects live at the end of each GC.
That's also how I interpret the results.
Marc
On Jan 6, 2013, at 5:41 PM, Marc Feeley wrote:
On 2013-01-06, at 1:17 PM, Bradley Lucier lucier@math.purdue.edu wrote:
I dismissed any differences that were < 20%.
Under that criterion, the new default did not lead to the best performance for dynamic, early, paraffins, and nboyer, for which the old benchmark setting of a 10,000K minimum heap won.
The new default heap size beat the old benchmark setting of 10,000K minimum heap for cpstak, diviter, divrec, mbrot, sumfp, and string.
For no programs did the old default heap size beat the other settings.
Very nice.
Things are more complicated with this CPU:
model name : Intel(R) Xeon(R) CPU X5570 @ 2.93GHz
which has a 256KB level 2 cache, and an 8MB level 3 cache.
So the new default heap size is 4 MB. I also tested with -:m1 to get the old default.
The old default beat both the new default and 10000KB heap for trav2, pnpoly, tak, ack, and tail.
The old default beat the new default for array1.
And the 10000KB heap size beat the new default with cpstak, ctak, divrec, trav1, dynamic, and nboyer.
The new default beat 10000KB heap size for mbrot, early, fibc, mazefun, and simplex.
Many times either or both of the new default and 10MB heap beat the old default.
Maybe we've reached the end of usefulness of tweaking heap size parameters.
Brad
On 2013-01-07, at 9:49 PM, Bradley Lucier lucier@math.purdue.edu wrote:
On Jan 6, 2013, at 5:41 PM, Marc Feeley wrote:
On 2013-01-06, at 1:17 PM, Bradley Lucier lucier@math.purdue.edu wrote:
I dismissed any differences that were < 20%.
Under that criterion, the new default did not lead to the best performance for dynamic, early, paraffins, and nboyer, for which the old benchmark setting of a 10,000K minimum heap won.
The new default heap size beat the old benchmark setting of 10,000K minimum heap for cpstak, diviter, divrec, mbrot, sumfp, and string.
For no programs did the old default heap size beat the other settings.
Very nice.
Things are more complicated with this CPU:
model name : Intel(R) Xeon(R) CPU X5570 @ 2.93GHz
which has a 256KB level 2 cache, and an 8MB level 3 cache.
So the new default heap size is 4 MB. I also tested with -:m1 to get the old default.
You have to be careful with that Intel processor type. On some versions of Linux the system call that returns the size of the cache returns the wrong value. You might want to double check by explicitly calling (##processor-cache-size 0 0). If it returns 0 as I suspect, the minimal default heap size of 1 MB will be used, not 4 MB.
Marc
On 2013-01-07, at 11:18 PM, Marc Feeley feeley@iro.umontreal.ca wrote:
You have to be careful with that Intel processor type. On some versions of Linux the system call that returns the size of the cache returns the wrong value. You might want to double check by explicitly calling (##processor-cache-size 0 0). If it returns 0 as I suspect, the minimal default heap size of 1 MB will be used, not 4 MB.
Sorry, I meant: (##processor-cache-size #f 0)
This returns the largest data cache size.
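So a quick way to double-check a given machine is simply to print the probed value (in bytes; 0 means the probe failed and the 1 MB floor applies):

(display (##processor-cache-size #f 0))
(newline)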
Marc