[Lisa_seminaires] REMINDER: UdeM-McGill-MITACS machine learning seminar Tues March 25th, 10:00am, PAA3195

Hugo Larochelle larocheh at iro.umontreal.ca
Mon 24 Mar 17:02:01 EDT 2008


This week, we are having a double feature seminar (see http://www.iro.umontreal.ca/article.php3?id_article=107&lang=en):


Image Classification with Higher-Order Neural Models

and

Deep Learning with Denoising Autoencoders


by James Bergstra and Pascal Vincent,
Département d’Informatique et de Recherche Opérationnelle
Université de Montréal

Location: Pavillon André Aisenstadt (UdeM), room 3195
Time: March 25th 2008, 10:00 am

First talk, by James Bergstra:
Neural network research in machine learning grew out of computational
neuroscience theories of the 1960s. While the class of affine-sigmoidal
feature extractors has been studied extensively since the mid-1980s,
the computational neuroscience community has moved on to
new models that are qualitatively different and more descriptive,
without being substantially more computationally expensive. This
paper brings a particular model proposed in (Rust, 2005) for low-level
neurons in the macaque visual system into a machine-learning context:
we evaluate this model as an activation function (feature extractor)
for single-layer neural networks that perform image
classification. The function we evaluate is somewhat similar to the
higher-order processing units discussed in (Minsky, 1969) and the
Sigma-Pi units, but avoids the computational difficulties associated
with these models by representing the second-order interaction
weights with a low-rank positive semi-definite matrix, and avoids the
learning difficulties associated with these models by using a gentler
non-linearity than the logistic sigmoid. Remarkably good comparative
results are obtained on three image classification tasks, including
1.4% error on MNIST using a single-layer network. These results suggest
that a single hidden layer neural network equipped with this neuron
model can capture important patterns that escape standard models such
as sigmoid neural networks and support vector machines based on
Gaussian and polynomial kernels.
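
For readers unfamiliar with higher-order units, here is a minimal NumPy
sketch of the kind of feature extractor the abstract describes. The exact
functional form (a first-order term plus a quadratic term x^T V V^T x, with
V a low-rank factor, passed through a non-linearity) and all names below are
assumptions drawn from the abstract, not the authors' implementation.

import numpy as np

def higher_order_feature(x, w, V, b, f=np.tanh):
    """Hypothetical higher-order hidden unit (names and exact form are
    assumptions based on the abstract, not the authors' code).

    x : (d,)   input vector (e.g. flattened image pixels)
    w : (d,)   first-order weights
    V : (d, k) low-rank factor; V @ V.T is the positive semi-definite
               matrix of second-order interaction weights, with k << d
    b : scalar bias
    f : element-wise non-linearity; the abstract calls for something
        gentler than the logistic sigmoid, tanh is only a placeholder
    """
    linear = w @ x + b
    # x^T (V V^T) x computed through the factor V, so the full d x d
    # interaction matrix is never formed explicitly.
    quadratic = np.sum((V.T @ x) ** 2)
    return f(linear + quadratic)

# Toy usage on a random 28x28 "image" with rank-4 second-order interactions.
rng = np.random.default_rng(0)
d, k = 28 * 28, 4
x = rng.normal(size=d)
w = 0.01 * rng.normal(size=d)
V = 0.01 * rng.normal(size=(d, k))
print(higher_order_feature(x, w, V, b=0.0))

The low-rank factorization keeps the number of second-order parameters at
d*k rather than d*(d+1)/2, which is what makes these units practical.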

Second talk, by Pascal Vincent:
Previous work has shown that the difficulties in learning deep
generative or discriminative models can be overcome by an initial
unsupervised learning step that maps inputs to useful intermediate
representations. We introduce and motivate a new training principle
for unsupervised learning of a representation based on the idea of
making the learned representations robust to partial corruption of
the input pattern. This approach can be used to train autoencoders,
and these denoising autoencoders can be stacked to initialize deep
architectures. The algorithm can be motivated from a manifold
learning and information-theoretic perspective or from a generative
model perspective. Comparative experiments clearly show the
surprising advantage of corrupting the input of autoencoders on a
pattern classification benchmark suite.
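
As a rough illustration of the training principle (and only an
illustration: tied weights, masking noise, sigmoid units and a
cross-entropy loss are assumptions on my part, and all names are made up),
one stochastic gradient step of a denoising autoencoder in NumPy might
look like this:

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def corrupt(x, noise_level):
    # Masking noise: zero out a random fraction of the input entries.
    return x * (rng.random(x.shape) > noise_level)

def dae_step(x, W, b_h, b_r, noise_level=0.25, lr=0.1):
    """One gradient step for a tied-weight denoising autoencoder.
    The hidden code is computed from the corrupted input, but the
    cross-entropy reconstruction loss compares against the clean input x."""
    x_tilde = corrupt(x, noise_level)
    h = sigmoid(W @ x_tilde + b_h)      # encode the corrupted input
    r = sigmoid(W.T @ h + b_r)          # decode (tied weights)
    d_r = r - x                         # gradient at the decoder pre-activation
    d_h = (W @ d_r) * h * (1.0 - h)     # back-propagated to the encoder
    W -= lr * (np.outer(d_h, x_tilde) + np.outer(h, d_r))
    b_h -= lr * d_h
    b_r -= lr * d_r
    return float(-(x * np.log(r + 1e-9) + (1 - x) * np.log(1 - r + 1e-9)).sum())

# Toy usage: 8-dimensional binary inputs, 4 hidden units.
d, n_h = 8, 4
W = 0.1 * rng.normal(size=(n_h, d))
b_h, b_r = np.zeros(n_h), np.zeros(d)
data = (rng.random((500, d)) > 0.5).astype(float)
for _ in range(20):
    for x in data:
        dae_step(x, W, b_h, b_r)

Stacking, as mentioned in the abstract, then amounts to training one such
layer, fixing it, and training another denoising autoencoder on its hidden
codes before a final supervised fine-tuning of the whole deep network.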



More information about the Lisa_seminaires mailing list