In Section 4.2 we saw that the compile-time overhead for applications like Gmail is higher than for the V8 benchmark. A natural question is which portions of the compilation process contribute to this overall overhead. We break down compilation time into its constituent phases by analyzing the contents of the profile and attributing each point in the trace to one of nine portions of the codebase. Five of these phases (Assembler, Lithium, Hydrogen, AST, and Parser) correspond to the portions shown in Figure 1, and three others (LowLevel, Shared, and Tracing) represent work shared among multiple parts of the compiler. Figure 5 illustrates the breakdown for the three summary configurations used previously.
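As a rough illustration of this attribution step, the sketch below buckets profiler samples into phases by matching each sampled frame's source path against per-phase path prefixes. The prefix table, the sample format, and the helper names are illustrative assumptions, not the exact mapping used to produce Figure 5.

```python
# Sketch: attribute profiler ticks to compiler phases by matching the source
# path of each sampled frame against per-phase path prefixes.
# NOTE: the prefixes below are hypothetical placeholders for illustration only.
from collections import Counter

PHASE_PREFIXES = {
    "Assembler": ["src/assembler", "src/macro-assembler"],
    "Lithium":   ["src/lithium"],
    "Hydrogen":  ["src/hydrogen"],
    "AST":       ["src/ast"],
    "Parser":    ["src/parser", "src/scanner"],
    "LowLevel":  ["src/heap", "src/objects"],
    "Shared":    ["src/compiler"],
    "Tracing":   ["src/log"],
}

def phase_of(frame_path):
    """Map a sampled frame's source path to a phase, or None if unmatched."""
    for phase, prefixes in PHASE_PREFIXES.items():
        if any(frame_path.startswith(p) for p in prefixes):
            return phase
    return None

def break_down(samples):
    """samples: iterable of (frame_path, ticks) pairs taken from the profile."""
    totals = Counter()
    for frame_path, ticks in samples:
        phase = phase_of(frame_path)
        if phase is not None:
            totals[phase] += ticks
    return totals
```

Summing the resulting per-phase tick counts over a run yields the kind of breakdown plotted in Figure 5.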
An overall conclusion for Gmail is that much of its compilation time is not spent in the optimizer. Rather, it comes from work that must be done in any case, even with the optimizing compiler turned off entirely. Of all the phases, the parser is the largest contributor. Though initially surprising, this is sensible in light of the much larger size of the Gmail source: based on internal counters, the parser handles over 14 times as much code in Gmail as in BenchM. While compilation overhead for BenchM does increase significantly as a result of time spent in the optimization path, the total overhead is small compared to the decrease in JavaScript execution time it produces. These results also support the recurring observation that the opportunity for optimization is limited in real-world applications.
http://www.mrcaps.com/proj/OptimizationEfficacy/site/