[gambit-list] Performance of different versions of gcc

Bradley Lucier lucier at math.purdue.edu
Sun Apr 29 10:52:18 EDT 2012


Marc:

The compile times have gone up and down over the years with gcc.

Many of the decreases in compile time are due to improvements in algorithms or data structures.

Some are just from the compiler checking whether a function has too many basic blocks/gotos/labels/variables/... and turning off various optimizations that would take too long given the current algorithms and data structures.  This is not always a bad idea: some optimizations are ineffective on very large functions, and the functions in the C files generated by Gambit *are* very large when compiled with -D___SINGLE_HOST.
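
For concreteness, a rough sketch of the kind of build in question (the file name and the Gambit include path here are placeholders, not taken from your test):

    gsc -c app.scm     # Gambit translates the Scheme module to app.c
    gcc -O2 -D___SINGLE_HOST -I/usr/local/Gambit-C/include -c app.c
    # with ___SINGLE_HOST the module is compiled as one very large C
    # function, which is exactly the case where gcc starts turning
    # passes off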

On the other hand, the programmer may think that all the optimizations specified by -O2 are being applied to his/her code when, in fact, many of them are disabled.  I thought that the programmer should be given the option to be warned about this, so in 2000 I suggested the option

-Wdisabled-optimization

which could be used to inform the programmer when a specific optimization is disabled for whatever reason.
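
For instance (again with placeholder file names), one would just add it to the usual flags:

    gcc -O2 -Wdisabled-optimization -D___SINGLE_HOST \
        -I/usr/local/Gambit-C/include -c app.c
    # each pass that gcc gives up on is reported as a warning on stderr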

The use of this warning proved not to be popular with gcc developers, and perhaps I'm the only one who ever suggested places where it could be used (a few places in the GCSE and constant propagation code).  It should be possible, in a few hours' work, to track down every place where it is applicable (search for where the gcc code base checks the various parameters that specify how big a function has to be before an optimization is disabled, and insert the warning there), but, really, the parameter-checking code could be written so that this warning is emitted automatically in such places.
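
Roughly, the search I have in mind would look something like this in a gcc source tree of that era (the identifiers and file names are from memory and may differ between versions):

    # find where a pass consults its size/limit parameters before bailing out
    grep -n "PARAM_VALUE" gcc/gcse.c
    grep -n "max-gcse-memory" gcc/params.def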

What I'm trying to say is that while this test gives a good study of gcc's default behavior, a broader study would be needed to see which optimizations are applied to Gambit-generated code, how long they take at compile time, and how much they benefit execution speed.
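
A minimal sketch of that kind of measurement, assuming a single Gambit-generated app.c and a placeholder include path:

    # compile time and number of disabled passes at -O1 versus -O2
    time gcc -O1 -Wdisabled-optimization -D___SINGLE_HOST \
         -I/usr/local/Gambit-C/include -c app.c 2> disabled-O1.log
    time gcc -O2 -Wdisabled-optimization -D___SINGLE_HOST \
         -I/usr/local/Gambit-C/include -c app.c 2> disabled-O2.log
    wc -l disabled-O1.log disabled-O2.log

The run-time side would still have to be measured separately, by benchmarking the resulting executables.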

Brad

