[Lisa_seminaires] UdeM-McGill-MITACS machine learning seminar Fri Nov. 20 at 14h30

Dumitru Erhan dumitru.erhan at umontreal.ca
Thu 12 Nov 14:05:52 EST 2009


Next week's seminar (see
http://www.iro.umontreal.ca/article.php3?id_article=107&lang=en):

Unlocking Brain-Inspired Computer Vision: a Multi-Disciplinary,
High-Throughput Approach

by Nicolas Pinto
Department of Brain and Cognitive Sciences
Massachusetts Institute of Technology

Location: Pavillon André-Aisenstadt (UdeM), room 3195
Time: Friday, November 20, 14h30

The construction of artificial vision systems and the study of
biological vision are naturally intertwined as they represent
simultaneous efforts to forward and reverse engineer systems with
similar goals. While exploration of the neuronal substrates of visual
processing provides clues and inspiration for artificial systems,
artificial systems can in turn serve as important generators of new
ideas and working hypotheses. However, while systems neuroscience has
so far provided inspiration for some of the "broad-stroke" properties
of the visual system (e.g. hierarchical organization, synaptic
integration of inputs and thresholding, normalization, plasticity,
etc.), much is still unknown. Even for those qualitative properties
that most biologically inspired models hold in common, experimental
data currently
provide little constraint on their key parameters. Consequently, it is
difficult to truly evaluate a set of computational ideas, since the
performance of any one model depends strongly on its particular
instantiation - e.g. the size of the pooling kernels, the number of
units per layer, exponents in normalization operations, etc. Since the
number of such parameters (explicit or implicit) is very large, and
the typical computational cost of evaluating one particular model is
high, the space of possible model instantiations usually goes largely
unexplored. Compounding the problem, even if a set of computational
ideas is on the right track, the instantiated "scale" of those ideas
is typically small (e.g. in terms of dimensionality and amount of
learning experience provided). Thus, when a model fails to approach
the abilities of the visual system, we are left uncertain whether this
failure is because we are missing a fundamental idea, or because the
correct "parts" have not been tuned correctly, assembled at sufficient
scale, or provided with sufficient natural experience.
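
To make the combinatorics concrete, here is a minimal Python sketch;
the parameter names and ranges below are hypothetical stand-ins, not
figures from the talk:

# A minimal sketch, with hypothetical parameters, of how quickly a
# coarse grid over such model choices multiplies into an impractically
# large search space.
from math import prod

param_grid = {
    "pool_kernel_size": [3, 5, 7, 9],        # size of the pooling kernels
    "units_per_layer":  [64, 128, 256, 512], # number of units per layer
    "norm_exponent":    [1.0, 1.5, 2.0],     # exponent in normalization ops
    "num_layers":       [2, 3],              # depth of the hierarchy
    "threshold":        [0.0, 0.5, 1.0],     # activation threshold
}

n_models = prod(len(v) for v in param_grid.values())
print(n_models)  # 4 * 4 * 3 * 2 * 3 = 288 grid points, before counting
                 # implicit parameters (learning rate, input resolution,
                 # initialization), each of which multiplies the total.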

To pave a possible way forward, we have begun developing a
high-throughput approach to expansively explore a large range of
biologically inspired models - including models of larger, more
realistic scale - leveraging recent advances in commodity stream
processing hardware (high-end GPUs and PlayStation 3 Cell
processors) and scientific cloud computing (e.g. Amazon EC2). In
analogy to high-throughput screening approaches in molecular biology
and genetics, we generated and trained thousands of potential network
architectures and parameter instantiations, and "screened" the visual
representations produced by these models using an object recognition
task. From these candidate models, the most promising were selected
for further analysis. We have shown that this approach can yield
significant, reproducible gains in performance across an array of
basic object recognition tasks, consistently outperforming a variety
of state-of-the-art purpose-built vision systems from the literature,
and that it can offer insight into which computational ideas are most
important for achieving this performance.
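
As a rough illustration of this "generate, screen, select" loop, the
Python sketch below samples random instantiations, scores each with a
placeholder screening function, and keeps the top performers; every
name and number in it is an illustrative assumption, not the actual
pipeline:

import heapq
import random

def sample_model_params(rng):
    """Draw one random model instantiation from the search space."""
    return {
        "pool_kernel_size": rng.choice([3, 5, 7, 9]),
        "units_per_layer": rng.choice([64, 128, 256, 512]),
        "norm_exponent": rng.choice([1.0, 1.5, 2.0]),
    }

def train_and_screen(params, rng):
    """Placeholder for the expensive step: instantiate and train the
    model (e.g. on a GPU), then return its accuracy on a held-out
    object recognition screening task."""
    return rng.random()  # stand-in for a real screening score

rng = random.Random(0)
scored = [(train_and_screen(p, rng), p)
          for p in (sample_model_params(rng) for _ in range(1000))]
# Select the most promising candidates for further analysis.
for score, params in heapq.nlargest(5, scored, key=lambda t: t[0]):
    print(f"{score:.3f}  {params}")

Each call to the screening function is the expensive, hardware-bound
step, which is why the candidates can be evaluated in parallel across
GPUs, Cell processors, or cloud instances.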

As the scale of available computational power continues to expand, we
believe that this approach holds great potential both for accelerating
progress in artificial vision, and for generating new,
experimentally testable hypotheses for the study of biological vision.


More information about the Lisa_seminaires mailing list