On 2011-10-31, at 6:12 PM, Marc Feeley wrote:
On 2011-10-31, at 5:44 PM, chevalma@iro.umontreal.ca wrote:
Since Marc and Bruno are working on our JS profiling effort, I started gathering a list of things I'd like to see profiled to guide my optimization efforts:
http://pointersgonewild.wordpress.com/2011/10/31/what-id-like-to-know-about-...
The (fairly long) list describes the metrics I'd like information about. I realize that other team members probably have profiling requests of their own. I should be able to provide some help in implementing such analyses once the tool is ready.
The list seems like a good starting point.
Interesting list... What I would like to know is the *motivation* for each item on the list. In other words, what you think that information will be used for (which optimization it will justify).
+1. Without that, there's a high risk of not computing useful information.
Some items are not particularly useful as-is, and some refinement of the information is needed. For example, point 15 ("How many objects are allocated during program runs") doesn't seem very useful on its own. If I tell you it is 1000 for program X and 10000 for program Y, what does that tell you?

Perhaps X is still allocating more memory than Y because its objects are bigger, so object allocation is more of a problem for X than for Y. Perhaps all the objects are the same size, but object allocation is less critical for Y because its execution time is 100 times longer (so Y actually allocates 10 times *fewer* objects per unit of time). It is not easy to infer the impact on performance and optimization just by looking at the *number* of objects allocated. So, what is an interesting measure, or combination of measures, in this case?
Things that I've found useful in the past: the distribution of object sizes, and the allocation rate (which measures one aspect of the pressure on the GC), although these are not obvious to implement simply with high-level code transformations.
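To make that concrete, here is a minimal sketch of what such a transformation could produce: every object literal in the program is rewritten into a call to a recording hook. The __recordAlloc / __allocReport names are hypothetical, and real instrumentation would also have to cover `new` expressions, arrays, closures, etc.:

    // The transformation rewrites, e.g.,
    //   var p = {x: 1, y: 2};
    // into
    //   var p = __recordAlloc({x: 1, y: 2});

    var __allocStats = { counts: {}, total: 0, startTime: Date.now() };

    // Hypothetical runtime hook inserted by the transformation.
    function __recordAlloc(obj) {
        var size = Object.keys(obj).length;  // number of fields at allocation
        __allocStats.counts[size] = (__allocStats.counts[size] || 0) + 1;
        __allocStats.total += 1;
        return obj;  // pass the object through unchanged
    }

    // Dump the size distribution and the overall allocation rate.
    function __allocReport() {
        var elapsedSecs = (Date.now() - __allocStats.startTime) / 1000;
        return {
            sizeDistribution: __allocStats.counts,  // field count -> #allocations
            allocsPerSecond: __allocStats.total / elapsedSecs
        };
    }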
Also, be very careful with averages. You can't characterize "typical" JS programs by a single number using averages. At the very least, it is more interesting to use distributions (for example, what proportion of the programs in the suite allocates only objects with fewer than 1 field, fewer than 2, fewer than 3, etc.). This tells a more interesting story than an average.
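For instance, given per-program data (say, the largest object size each benchmark allocates, gathered by the profiler), that distribution could be computed with something like this sketch, where maxFieldsPerProgram is an assumed input:

    // maxFieldsPerProgram: for each benchmark, the largest number of
    // fields in any object it allocates (assumed gathered by the profiler).
    function proportionBelow(maxFieldsPerProgram, kMax) {
        var result = [];
        for (var k = 1; k <= kMax; k++) {
            var n = maxFieldsPerProgram.filter(function (m) {
                return m < k;
            }).length;
            result.push({ k: k, proportion: n / maxFieldsPerProgram.length });
        }
        return result;
    }

    // proportionBelow([0, 2, 5, 1], 3)
    // -> [ {k:1, proportion:0.25}, {k:2, proportion:0.5}, {k:3, proportion:0.75} ]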
I agree with Marc that averages aren't that useful in practice. I'd compute distributions any time they make sense. For example, items 2 & 3 can be combined into a histogram with buckets for 0, 1, 2, 3-10, 11-50, ... calls to a function (the buckets should be determined by the optimizations we can or would like to perform). Similarly, items 4 & 5 can easily be combined. That being said, most of that list should be easy to compute with the new profiling infrastructure (even the eval bits). More about that tomorrow, I hope...
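As a rough sketch of that histogram (the bucket boundaries are just the ones mentioned above, and callCounts is assumed to come from the profiler):

    // Illustrative buckets; in practice they would be chosen to match
    // the optimizations we can/would like to perform.
    var buckets = [
        { label: "0",     min: 0,  max: 0 },
        { label: "1",     min: 1,  max: 1 },
        { label: "2",     min: 2,  max: 2 },
        { label: "3-10",  min: 3,  max: 10 },
        { label: "11-50", min: 11, max: 50 },
        { label: "51+",   min: 51, max: Infinity }
    ];

    // callCounts: one entry per function, giving how many times it was
    // called during the run (assumed to be gathered by the profiler).
    function callCountHistogram(callCounts) {
        var hist = buckets.map(function (b) {
            return { label: b.label, count: 0 };
        });
        callCounts.forEach(function (c) {
            for (var i = 0; i < buckets.length; i++) {
                if (c >= buckets[i].min && c <= buckets[i].max) {
                    hist[i].count += 1;
                    break;
                }
            }
        });
        return hist;
    }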
Bruno