Don't forget that this Friday is a holiday, so there will be no group
meeting. The next meeting will be on Friday, April 5.
Enjoy the long weekend!
Marc
Our study extends previous studies by showing some consequences of these
differences. We compare the execution behavior of four application
classes, i.e., four JavaScript benchmark suites, the first pages of the
Alexa top-100 web sites, 22 use cases for three social networks, and
demo applications for the emerging HTML5 standard. Our results indicate
that just-in-time compilation often increases the execution time for web
applications, and that there are large differences in the execution
behavior between benchmarks and web applications at the bytecode level.
http://link.springer.com/chapter/10.1007%2F978-3-642-22233-7_35?LI=true
Erick
Here is the code Ilari and Michael used to gather information about the
behavior of V8.
Erick
-------- Original Message --------
Subject: Re: Quantifying Optimization Efficacy
Date: Mon, 25 Mar 2013 14:03:40 -0400
From: Ilari Shafer <mrcaps@gmail.com>
To: Erick Lavoie <erick.lavoie@gmail.com>
Cc: mmaass@cs.cmu.edu, ishafer@cs.cmu.edu
Not sure - for whatever reason we started developing on a git repo in
Assembla. If anything, it saved us from worrying too much about
usernames and passwords for the tests (I just scrubbed a few, oops) :)
Enjoy: https://github.com/mrcaps/optimization-efficacy
Cheers,
Ilari
On Mon, Mar 25, 2013 at 1:48 PM, Erick Lavoie <erick.lavoie@gmail.com> wrote:
Actually I was thinking of the second option but I was wondering if
you had a particular reason for not having done it before.
Thanks!
Erick
On 2013-03-25 13:38, Ilari Shafer wrote:
> We don't mind, although it'd be helpful to know what form you were
> thinking of:
> * taking pieces or all of it and putting them in the photon-js
> repo - go right ahead
> * starting a separate repo for it as-is: we'll put it up on
> github and send you where to fork it from (if only, selfishly, so
> that we can track who uses it)
>
> Cheers,
> Ilari
>
> On Sun, Mar 24, 2013 at 9:21 PM, Erick Lavoie <erick.lavoie@gmail.com> wrote:
>
> Oh, do you guys mind if I put your implementation on github?
> I'll make sure to properly give you credit for the work ;-).
>
> Erick
>
>
> On 2013-03-24 19:50, Ilari Shafer wrote:
>> Hi Erick,
>>
>> Thanks for your message! It's great to see you're working on
>> an instrumentation tactic that isn't tied to a given browser,
>> and even without the optimization you're planning Photon is
>> really quite impressive in terms of low overhead (I would
>> have feared much more).
>>
>> We're quite happy to share the source; attached is an archive
>> of it, and the README.md in there has some additional
>> information.
>>
>> A few disclaimers belong up front :) -
>> * Much of this is significantly easier if you don't need
>> Chromium and can just use v8 to run the tests you're
>> interested in (v8bench, sunspider, ...). It looks like this
>> is the case with Photon?
>> * This was built against a copy of the tree around 11 months
>> ago, so your mileage may vary.
>> * If you want something to cite for motivation purposes
>> (e.g., importance of instrumentation for real-world code),
>> we'll push out a techreport of this given that multiple folks
>> have asked about it. I'll send you a note when it's ready.
>>
>> A quick overview of the instrumentation parts of the framework -
>> * Profiler: we turned it on with the
>> --js-flags="--prof-browser-mode --noprof-lazy" arguments to
>> Chromium (see src/chrome.py) and modified the profiler parse
>> scripts to emit some machine-readable json output (see
>> src/tools). The profile output is essentially the same as
>> what you'll see in the within-browser "profiles" tab.
>> * Counters: the source changes in src/v8/src are
>> modifications to the src/v8/src.orig sources from the v8
>> directory of Chromium. Note that we can't currently gather
>> counters and profiles at the same time.
>> (see README.md for more)
>>
>> Hope that helps; let us know if there's anything else we can
>> provide.
>>
>> Cheers,
>> Ilari
>>
>> On Sun, Mar 24, 2013 at 3:58 PM, Erick Lavoie <erick.lavoie@gmail.com> wrote:
>>
>> Hi Michael and Ilari,
>>
>> I just stumbled upon your web report on Quantifying
>> Optimization Efficacy and found it really interesting. I
>> am one of the authors of the Tachyon JS VM and I am
>> currently working on Photon, a metacircular VM for JS
>> run-time instrumentation which runs over V8, and we
>> currently face the problem of understanding where the
>> bottlenecks of our approach are in terms of performance.
>>
>> First, I wanted to thank you guys for having done that
>> work; I am sure there are insights that are going to be
>> useful for us. Second, can I have access to the source
>> code/patches/instrumented version of Chrome you used?
>> That could be really useful for us to nail down the
>> effect of our system on V8.
>>
>> Thanks!
>>
>> Erick
>>
>>
>
>
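For reference, here is a minimal sketch (not the repo's actual driver, which is src/chrome.py) of how a profiling run with the flags quoted above could be launched from Python; the binary name, user-data directory, and target page are assumptions.

import subprocess

CHROMIUM = "chromium"                      # assumed binary name on PATH
TARGET = "http://localhost:8000/run.html"  # hypothetical test page

cmd = [
    CHROMIUM,
    # V8 flags quoted in the message above; they make V8 write its tick log
    # (typically v8.log in the working directory) for offline parsing.
    "--js-flags=--prof-browser-mode --noprof-lazy",
    "--user-data-dir=/tmp/prof-profile",   # throwaway profile dir (assumption)
    TARGET,
]
subprocess.run(cmd, check=False)           # returns once the browser exits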
There are some really interesting insights into the behavior of V8 in
this work, and the quality of the work is really impressive for what
appears to be a term project for a course.
In addition, the documentation on the initial project definition, the
progress tracking, and the final report is really interesting. We could
draw inspiration from it for our own projects.
Erick
*Instrumenting V8 to Measure the Efficacy of Dynamic Optimizations on
Production Code*
4.1 Optimization has limited benefit
[...]
4.2 Many optimizations diminish performance
[...]
4.3 Reasons for Performance
Tables 3
<http://www.mrcaps.com/proj/OptimizationEfficacy/site/#x1-140023> and 4
<http://www.mrcaps.com/proj/OptimizationEfficacy/site/#x1-140034> summarize
the interesting counters for BenchM and Gmail respectively. In general,
the trend is towards more deoptimizations, stack interrupts, and
compiled code as optimizations get more aggressive. Ignoring the case
where optimizations don't occur, program counter to source code
look-ups also occur more often and stubs are utilized less often as
optimization becomes more aggressive. The trend is towards performing
notably more operations in the compiler as the amount of optimization
increases. In particular, we measured the impact of increasing
deoptimization. By isolating time taken for deoptimization from the
profiler results, we find that for always_opt on BenchM the time required
to execute deoptimization increases from 0 to 7333±107 profiler ticks as
compared to a total of 24726±193 execution ticks --- a very significant
component of runtime.
In 4.2
<http://www.mrcaps.com/proj/OptimizationEfficacy/site/#x1-130004.2> we
saw that the compile-time overhead for applications like Gmail is higher
than for the V8 benchmark. The question arises of which portions of the
compilation process contribute to this overall overhead. We break down
compilation time into its multiple phases by analyzing the contents of
the profile and attributing a point in the trace to one of nine portions
of the codebase. Five of these phases (Assembler, Lithium, Hydrogen,
AST, and Parser) correspond to the portions in Figure 1
<http://www.mrcaps.com/proj/OptimizationEfficacy/site/#x1-40011>, and
three (LowLevel, Shared, and Tracing) are work that is shared between
multiple parts of the compiler. Figure 5
<http://www.mrcaps.com/proj/OptimizationEfficacy/site/#x1-140045> illustrates
the breakdown for the three summary configurations used previously.
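As a rough illustration of the attribution step described above (not the report's actual classification rules), each profiler tick could be mapped to a phase by the source file it falls in; the phase names come from the text, while the path prefixes and tick format below are assumptions.

# Phase names come from the report text; the path prefixes and the tick
# format (one source path per tick) are assumptions for illustration only.
PHASE_PREFIXES = [
    ("Parser",    ("src/parser",)),
    ("AST",       ("src/ast",)),
    ("Hydrogen",  ("src/hydrogen",)),
    ("Lithium",   ("src/lithium",)),
    ("Assembler", ("src/assembler", "src/macro-assembler")),
]

def classify(source_path):
    """Attribute one tick to a compiler phase based on its source file."""
    for phase, prefixes in PHASE_PREFIXES:
        if source_path.startswith(prefixes):
            return phase
    return "Shared"  # LowLevel/Shared/Tracing are lumped together here

def breakdown(tick_paths):
    """Count ticks per phase, given one source path per profiler tick."""
    counts = {}
    for path in tick_paths:
        phase = classify(path)
        counts[phase] = counts.get(phase, 0) + 1
    return counts

print(breakdown(["src/parser.cc", "src/hydrogen.cc", "src/runtime.cc"]))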
An overall conclusion for Gmail is that much of this compilation time
does not come from time spent in the optimizer. Rather, it comes from
work that must be done in any case, even with the optimizing compiler
turned off entirely. Of all the time allocation, the parser is the
largest contributor. Though initially surprising, this is sensible in
light of the much larger size of the Gmail source: based on internal
counters, the parser handles over 14 times as much code in Gmail as in
BenchM. While compilation overhead for BenchM does increase
significantly as a result of time spent in the optimization path, the
total overhead is small compared to the decrease in JavaScript execution
time it produces. These results also support the running observation
that the opportunity for optimization is limited in real-world applications.
http://www.mrcaps.com/proj/OptimizationEfficacy/site/
Found while doing a literature review.
Guidelines:
1. Be quiet
2. Write it all down
3. Extract more details
4. Reserve time for conflict, and realize that you don't have to agree.
5. Don't ask for critique if you only want validation. If you want a
hug, just ask.
http://www.ac4d.com/2012/04/30/do-you-want-critique-or-a-hug/
Slides summary:
There are 2 sides to critique: giving & receiving.
The right intent (on both sides) is to try to understand the decisions
made so far and their impact toward meeting goals and objectives.
Critique is a skill. It takes practice to improve.
There are basic rules that should be followed to help ensure good critique.
Mutually understood and agreed upon goals are critical both when asking
for and giving critique.
Critique can be done both internally and with clients. Use up to 6
people for about 1 hour.
Be prepared to deal with difficult people. You will encounter them.
Critique can be incorporated into the design process both as an activity
and as part of other activities.
http://www.slideshare.net/adamconnor/discussing-design-the-art-of-critique
Erick
This year, OOPSLA introduces the voluntary submission of artifacts (VM,
code, data, etc.) related to the paper.
"The high level goal of the Artifact Evaluation (AE) process is to
empower others to build upon the contributions of a paper."
The submission deadline for the artifact is June 1, and only authors
whose paper was accepted in the first phase are invited to submit an
artifact. Submitting an artifact has no influence on the evaluation of
the paper. The artifact will be evaluated separately and included in a
"source materials" section of the ACM Digital Library.
"The artifact will be evaluated in relation to the *expectations set
by the paper*. Thus, in addition to just running the artifact, the
evaluators will read the paper and may try to tweak provided inputs and
create new ones, to test the limits of the system."
http://splashcon.org/2013/cfp/due-june-01-2013/665-oopsla-artifacts
I suddenly feel a little less cynical about the review process!
Erick
There were two particularly interesting presentations on performance
analysis today at ASPLOS. Here are the abstracts and links to public
versions of the papers. Good things come in threes, so I am also adding
a 2009 paper that Paul already recommended to me, if I remember
correctly.
It will probably be worth preparing a lab presentation on this.
Erick
*Why You Should Care About Quantile Regression* (ASPLOS 2013)
Research has shown that correctly conducting and analysing computer
performance experiments is difficult. This paper investigates what is
necessary to conduct successful computer performance evaluation by
attempting to repeat a prior experiment: the comparison between two
schedulers.
[...] we demonstrate the successful application of quantile regression,
a recent development in statistics, to computer performance experiments.
Quantile regression can provide more insight into the experiment than
ANOVA, with the additional benefit of being applicable to data from any
distribution. This property makes it especially useful in our field,
since non-normally distributed data is common in computer experiments.
https://uwaterloo.ca/embedded-software-group/sites/ca.embedded-software-gro…
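This is not the paper's analysis, but a generic sketch of fitting a quantile regression to runtime measurements with statsmodels; the column names and data below are made up.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical measurements: execution time (s) under two schedulers.
df = pd.DataFrame({
    "time":      [10.2, 11.1, 10.8, 13.5, 10.6, 9.9, 12.7, 10.4, 14.1, 10.9],
    "scheduler": ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"],
})

# Fit the median (q=0.5); other quantiles (e.g. q=0.9 for the tail) are fit
# the same way, which is part of what makes quantile regression attractive.
model = smf.quantreg("time ~ scheduler", df)
result = model.fit(q=0.5)
print(result.summary())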
*Stabilizer: Statistically Sound Performance Evaluation* (ASPLOS 2013)
[...]
The standard methodology is to compare execution times before and
after applying changes.
Unfortunately, modern architectural features make this approach
unsound. Statistically sound evaluation requires multiple samples
to test whether one can or cannot (with high confidence) reject the
null hypothesis that results are the same before and after. However,
caches and branch predictors make performance dependent on
machine-specific parameters and the exact layout of code, stack
frames, and heap objects. A single binary constitutes just one sample
from the space of program layouts, regardless of the number of runs.
Since compiler optimizations and code changes also alter layout, it
is currently impossible to distinguish the impact of an optimization
from that of its layout effects.
This paper presents STABILIZER, a system that enables the use of
the powerful statistical techniques required for sound performance
evaluation on modern architectures. STABILIZER forces executions
to sample the space of memory configurations by repeatedly rerandomizing
layouts of code, stack, and heap objects at runtime.
STABILIZER thus makes it possible to control for layout effects.
Re-randomization also ensures that layout effects follow a Gaussian
distribution, enabling the use of statistical tests like ANOVA. We
demonstrate STABILIZER's efficiency (< 7% median overhead) and
its effectiveness by evaluating the impact of LLVM's optimizations
on the SPEC CPU2006 benchmark suite. We find that, while -O2
has a significant impact relative to -O1, the performance impact of
-O3 over -O2 optimizations is indistinguishable from random noise.
http://people.cs.umass.edu/~charlie/stabilizer.pdf
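As a generic illustration of the kind of test the abstract refers to (not STABILIZER itself), one could compare runtimes collected under repeatedly randomized layouts for two build configurations and ask whether the difference is distinguishable from noise; the numbers below are synthetic.

from scipy import stats

# Hypothetical runtimes (s) gathered under repeatedly randomized layouts.
o2_runs = [4.21, 4.35, 4.18, 4.40, 4.29, 4.33, 4.25, 4.31]
o3_runs = [4.27, 4.19, 4.38, 4.24, 4.36, 4.22, 4.30, 4.34]

# One-way ANOVA across the two configurations (equivalent to a t-test here).
f_stat, p_value = stats.f_oneway(o2_runs, o3_runs)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
if p_value > 0.05:
    print("Cannot reject the null hypothesis: the difference is indistinguishable from noise.")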
*Producing wrong data without doing anything obviously wrong* (ASPLOS 2009)
This paper presents a surprising result: changing a seemingly innocuous
aspect of an experimental setup can cause a systems researcher to draw
wrong conclusions from an experiment. What appears to be an innocuous
aspect in the experimental setup may in fact introduce a significant
bias in an evaluation. This phenomenon is called measurement bias in the
natural and social sciences.
Our results demonstrate that measurement bias is significant and
commonplace in computer system evaluation. By significant we mean that
measurement bias can lead to a performance analysis that either
over-states an effect or even yields an incorrect conclusion. By
commonplace we mean that measurement bias occurs in all architectures
that we tried (Pentium 4, Core 2, and m5 O3CPU), both compilers that we
tried (gcc and Intel's C compiler), and most of the SPEC CPU2006 C
programs. Thus, we cannot ignore measurement bias. Nevertheless, in a
literature survey of 133 recent papers from ASPLOS, PACT, PLDI, and CGO,
we determined that none of the papers with experimental results
adequately consider measurement bias.
Inspired by similar problems and their solutions in other sciences, we
describe and demonstrate two methods, one for detecting (causal
analysis) and one for avoiding (setup randomization) measurement bias.
http://machine.cs.colorado.edu/klipto/mytkowicz-asplos09.pdf
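A minimal sketch of the setup-randomization idea (not the paper's tooling): rerun the benchmark while varying a seemingly innocuous setup detail, here the size of the environment; the binary name and repetition count are assumptions.

import os
import random
import subprocess
import time

BENCH = "./benchmark"  # hypothetical benchmark binary (assumption)

times = []
for _ in range(30):
    env = dict(os.environ)
    # Randomize the environment size, one of the seemingly innocuous setup
    # details shown to bias measurements, before each repetition.
    env["PADDING"] = "x" * random.randint(0, 4096)
    start = time.perf_counter()
    subprocess.run([BENCH], env=env, check=True)
    times.append(time.perf_counter() - start)

print(f"mean = {sum(times) / len(times):.3f} s over {len(times)} randomized setups")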