There were two particularly interesting talks on performance analysis
at ASPLOS today. Here are the abstracts and links to public versions of
the papers. Good things come in threes, so I am also adding a 2009
paper that Paul already recommended to me, if I remember correctly.
It will probably be worth putting together a presentation on this for
the lab.
Erick
*Why You Should Care About Quantile Regression* (ASPLOS 2013)
Research has shown that correctly conducting and analysing computer
performance experiments is difficult. This paper investigates what is
necessary to conduct successful computer performance evaluation by
attempting to repeat a prior experiment: the comparison between two
schedulers.
[...] we demonstrate the successful application of quantile regression,
a recent development in statistics, to computer performance experiments.
Quantile regression can provide more insight into the experiment than
ANOVA, with the additional benefit of being applicable to data from any
distribution. This property makes it especially useful in our field,
since non-normally distributed data is common in computer experiments.
https://uwaterloo.ca/embedded-software-group/sites/ca.embedded-software-gro…
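(Aside from me, not from the paper: a minimal sketch of what quantile
regression looks like in practice, using statsmodels on made-up timing
data, just to show how it differs from a mean-only fit.)

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Made-up data: run time vs. input size, with skewed (non-normal) noise
# whose spread grows with input size.
rng = np.random.default_rng(0)
size = rng.uniform(1, 100, 500)
runtime = 2.0 * size + rng.exponential(scale=0.5 * size)
data = pd.DataFrame({"size": size, "runtime": runtime})

# An OLS fit models only the conditional mean.
ols = smf.ols("runtime ~ size", data).fit()

# Quantile regression fits any conditional quantile and makes no
# normality assumption about the residuals.
median = smf.quantreg("runtime ~ size", data).fit(q=0.5)
tail = smf.quantreg("runtime ~ size", data).fit(q=0.95)

print("OLS slope:           ", ols.params["size"])
print("median (q=0.5) slope:", median.params["size"])
print("tail (q=0.95) slope: ", tail.params["size"])

The tail fit can tell a very different story than the mean, and nothing
here assumes normally distributed data.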
*Stabilizer: Statistically Sound Performance Evaluation* (ASPLOS 2013)
[...]
The standard methodology is to compare execution times before and
after applying changes.
Unfortunately, modern architectural features make this approach
unsound. Statistically sound evaluation requires multiple samples
to test whether one can or cannot (with high confidence) reject the
null hypothesis that results are the same before and after. However,
caches and branch predictors make performance dependent on
machine-specific parameters and the exact layout of code, stack
frames, and heap objects. A single binary constitutes just one sample
from the space of program layouts, regardless of the number of runs.
Since compiler optimizations and code changes also alter layout, it
is currently impossible to distinguish the impact of an optimization
from that of its layout effects.
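(Quick aside, my own sketch with invented numbers: the "multiple samples
plus hypothesis test" step they refer to boils down to something like
this, before layout effects are even considered.)

from scipy import stats

# Invented run times (seconds) from repeated runs of the same benchmark.
before = [10.2, 10.4, 10.1, 10.6, 10.3, 10.5, 10.2, 10.4]
after = [10.0, 10.1, 9.9, 10.2, 10.0, 10.1, 9.9, 10.0]

# Welch's t-test: null hypothesis = same mean run time before and after.
t_stat, p_value = stats.ttest_ind(before, after, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("reject the null hypothesis: the change has a measurable effect")
else:
    print("cannot reject the null hypothesis")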
This paper presents STABILIZER, a system that enables the use of
the powerful statistical techniques required for sound performance
evaluation on modern architectures. STABILIZER forces executions
to sample the space of memory configurations by repeatedly
re-randomizing layouts of code, stack, and heap objects at runtime.
STABILIZER thus makes it possible to control for layout effects.
Re-randomization also ensures that layout effects follow a Gaussian
distribution, enabling the use of statistical tests like ANOVA. We
demonstrate STABILIZER's efficiency (< 7% median overhead) and
its effectiveness by evaluating the impact of LLVM's optimizations
on the SPEC CPU2006 benchmark suite. We find that, while -O2
has a significant impact relative to -O1, the performance impact of
-O3 over -O2 optimizations is indistinguishable from random noise.
http://people.cs.umass.edu/~charlie/stabilizer.pdf
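(Another aside of mine, with invented run times rather than SPEC
results: once each run samples a fresh layout, the ANOVA they mention is
the standard one-way test across optimization levels.)

from scipy import stats

# Invented per-run times (seconds) for the same benchmark built at three
# optimization levels, each run with a re-randomized layout.
o1 = [12.1, 12.3, 12.0, 12.4, 12.2, 12.3]
o2 = [11.2, 11.4, 11.1, 11.3, 11.2, 11.5]
o3 = [11.3, 11.2, 11.4, 11.1, 11.3, 11.2]

# One-way ANOVA: null hypothesis = all three levels have the same mean.
f_stat, p_value = stats.f_oneway(o1, o2, o3)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# A follow-up test on -O2 vs -O3 alone would mirror the paper's finding
# that -O3's gain over -O2 is indistinguishable from noise.
t_stat, p23 = stats.ttest_ind(o2, o3, equal_var=False)
print(f"-O2 vs -O3: p = {p23:.4f}")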
*Producing wrong data without doing anything obviously wrong* (ASPLOS 2009)
This paper presents a surprising result: changing a seemingly innocuous
aspect of an experimental setup can cause a systems researcher to draw
wrong conclusions from an experiment. What appears to be an innocuous
aspect in the experimental setup may in fact introduce a significant
bias in an evaluation. This phenomenon is called measurement bias in the
natural and social sciences.
Our results demonstrate that measurement bias is significant and
commonplace in computer system evaluation. By significant we mean that
measurement bias can lead to a performance analysis that either
over-states an effect or even yields an incorrect conclusion. By
commonplace we mean that measurement bias occurs in all architectures
that we tried (Pentium 4, Core 2, and m5 O3CPU), both compilers that we
tried (gcc and Intel's C compiler), and most of the SPEC CPU2006 C
programs. Thus, we cannot ignore measurement bias. Nevertheless, in a
literature survey of 133 recent papers from ASPLOS, PACT, PLDI, and CGO,
we determined that none of the papers with experimental results
adequately consider measurement bias.
Inspired by similar problems and their solutions in other sciences, we
describe and demonstrate two methods, one for detecting (causal
analysis) and one for avoiding (setup randomization) measurement bias.
http://machine.cs.colorado.edu/klipto/mytkowicz-asplos09.pdf
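(Last aside from me: their "setup randomization" fix can be approximated
by randomizing the parts of the setup you would otherwise hold fixed. A
rough sketch that perturbs the UNIX environment size before each run;
./benchmark is a placeholder, and the paper's own methodology goes well
beyond this.)

import os
import random
import subprocess
import time

BENCH_CMD = ["./benchmark"]  # placeholder for the real benchmark binary

samples = []
for _ in range(30):
    env = dict(os.environ)
    # Random padding changes the UNIX environment size, which shifts the
    # initial stack alignment -- one of the bias sources the paper measures.
    env["EXPERIMENT_PADDING"] = "x" * random.randint(0, 4096)
    start = time.perf_counter()
    subprocess.run(BENCH_CMD, env=env, check=True, stdout=subprocess.DEVNULL)
    samples.append(time.perf_counter() - start)

print(f"mean over randomized setups: {sum(samples) / len(samples):.3f} s")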