This week's seminar (see http://www.iro.umontreal.ca/article.php3?
id_article=107&lang=en).
Also, you are all invited to the thesis defense of Nicolas Le Roux,
right after this MITACS seminar, in the same room (see abstract below):
*************************
Learning Deep Hierarchies of Sparse and Invariant Features
by Yann LeCun,
Courant Institute of Mathematical Sciences
New York University
Location: Pavillon André Aisenstadt (UdeM), room 3195
Time: April 30th 2008, 14:00
A long-term goal of Machine Learning research is to solve highly
complex "intelligent" tasks, such as visual perception, auditory
perception, and language understanding. To reach that goal, the ML
community must solve two problems: the Deep Learning Problem, and the
Partition Function Problem.
There is considerable theoretical and empirical evidence that complex
tasks, such as invariant object recognition in vision, require "deep"
architectures, composed of multiple layers of trainable non-linear
modules. The Deep Learning Problem is related to the difficulty of
training such deep architectures.
Several methods have recently been proposed to train (or pre-train)
deep architectures in an unsupervised fashion. Each layer of the deep
architecture is composed of an encoder which computes a feature
vector from the input, and a decoder which reconstructs the input
from the features. A large number of such layers can be stacked and
trained sequentially, thereby learning a deep hierarchy of features
with increasing levels of abstraction. The training of each layer can
be seen as shaping an energy landscape with low valleys around the
training samples and high plateaus everywhere else. Forming these
high plateaus constitutes the so-called Partition Function problem.
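For readers unfamiliar with the greedy layer-wise scheme, the sketch below (Python/numpy) shows the generic idea: each layer is an encoder/decoder pair trained to reconstruct its own input, and once trained it is frozen so that its codes become the training data for the next layer. The layer sizes, squared-error objective, tanh encoder and plain stochastic gradient descent are illustrative assumptions, not the specific architecture used in the talk.

import numpy as np

rng = np.random.default_rng(0)

class AutoencoderLayer:
    """One encoder/decoder pair trained to reconstruct its own input."""
    def __init__(self, n_in, n_hidden, lr=0.01):
        self.W_enc = rng.normal(0, 0.01, (n_in, n_hidden))
        self.W_dec = rng.normal(0, 0.01, (n_hidden, n_in))
        self.lr = lr

    def encode(self, x):
        return np.tanh(x @ self.W_enc)            # feature vector

    def decode(self, h):
        return h @ self.W_dec                     # linear reconstruction

    def train_step(self, x):
        h = self.encode(x)
        x_hat = self.decode(h)
        err = x_hat - x                           # reconstruction error
        # Gradients of 0.5 * ||x_hat - x||^2 w.r.t. both weight matrices.
        g_dec = np.outer(h, err)
        g_enc = np.outer(x, (err @ self.W_dec.T) * (1 - h ** 2))
        self.W_dec -= self.lr * g_dec
        self.W_enc -= self.lr * g_enc
        return 0.5 * np.sum(err ** 2)

def train_stack(data, layer_sizes, epochs=10):
    """Greedy layer-wise training: each layer learns to reconstruct the
    codes produced by the (already trained, now frozen) layer below."""
    layers, inputs = [], data
    for n_in, n_hidden in zip(layer_sizes[:-1], layer_sizes[1:]):
        layer = AutoencoderLayer(n_in, n_hidden)
        for _ in range(epochs):
            for x in inputs:
                layer.train_step(x)
        layers.append(layer)
        inputs = np.array([layer.encode(x) for x in inputs])  # input to next layer
    return layers

# Toy usage on random "patches"; real experiments would use image data.
patches = rng.normal(size=(200, 64))
stack = train_stack(patches, layer_sizes=[64, 32, 16])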
A particular class of methods for deep energy-based unsupervised
learning will be described that solves the Partition Function problem
by imposing sparsity constraints on the features. The method can
learn multiple levels of sparse and overcomplete representations of
data. When applied to natural image patches, the method produces
hierarchies of filters similar to those found in the mammalian visual
cortex.
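A minimal way to picture the sparsity constraint, reusing the AutoencoderLayer from the sketch above, is to add a penalty on the code to each layer's reconstruction objective. The L1 penalty and its weight below are generic stand-ins, not necessarily the sparsifying mechanism used in the talk; the point is that keeping most code units near zero limits how much of input space can be reconstructed well, which is what raises the energy plateaus away from the data.

class SparseAutoencoderLayer(AutoencoderLayer):
    """Same layer, but with an L1 penalty pushing most code units to zero."""
    def __init__(self, n_in, n_hidden, lr=0.01, sparsity=0.05):
        super().__init__(n_in, n_hidden, lr)
        self.sparsity = sparsity                  # weight of the L1 penalty

    def train_step(self, x):
        h = self.encode(x)
        x_hat = self.decode(h)
        err = x_hat - x
        # Loss: 0.5 * ||x_hat - x||^2  +  sparsity * ||h||_1
        g_dec = np.outer(h, err)
        dh = err @ self.W_dec.T + self.sparsity * np.sign(h)
        g_enc = np.outer(x, dh * (1 - h ** 2))
        self.W_dec -= self.lr * g_dec
        self.W_enc -= self.lr * g_enc
        return 0.5 * np.sum(err ** 2) + self.sparsity * np.sum(np.abs(h))

# An overcomplete sparse layer: more code units than inputs.
sparse_layer = SparseAutoencoderLayer(n_in=64, n_hidden=128)
for x in patches:
    sparse_layer.train_step(x)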
An application to category-level object recognition with invariance
to pose and illumination will be described (with a live demo).
Another application to vision-based navigation for off-road mobile
robots will be described (with videos). The system autonomously
learns to discriminate obstacles from traversable areas at long range.
This is joint work with Y-Lan Boureau, Sumit Chopra, Raia Hadsell, Fu-
Jie Huang, Koray Kavukcuoglu, and Marc’Aurelio Ranzato.
************************
Thesis defense of Nicolas Le Roux:
Title: Theoretical advances on the representation and optimization
of neural networks
When and where: Wednesday, April 30th, 15:30, Pavillon Aisenstadt, room
3195
Advisor: Yoshua Bengio
Jury: Pierre L'Ecuyer, Pascal Vincent
Examiner: Yann Le Cun
Abstract:
Neural networks are a class of learning algorithms widely used in
artificial intelligence. The general enthusiasm they generated in the
1980s unfortunately faded because of the difficulty of optimizing them,
a decline accelerated by the appearance of kernel methods in the 1990s.
After highlighting the limitations of kernel methods and of shallow
algorithms in general, I will present several extensions of neural
networks that broaden their capabilities and ease their optimization.
Finally, I will give a detailed analysis of deep algorithms, along with
a fast gradient descent algorithm that makes them usable in
applications where processing speed is essential.
Next week's seminar (see http://www.iro.umontreal.ca/article.php3?
id_article=107&lang=en):
Restricted Boltzmann machines: Performance and behavior on image
databases and future plans
by Karol Gregor,
California Institute of Technology
Location: Pavillon André Aisenstadt (UdeM), room 3195
Time: April 14th 2008, 11:30am
Inspired by the amazing capabilities of the cortex and, at the same
time, by its relatively homogeneous hierarchical structure, it is a
worthwhile pursuit to develop algorithms that can be repeated in a
hierarchical fashion and applied to a large class of problems,
including vision, speech recognition and motor control. Restricted
Boltzmann machines are a very capable tool and a good step in this
direction. First, I will discuss our study of restricted Boltzmann
machines on the Scenes and Caltech 256 image databases, applied to
bags of words of SIFT features. I will show how performance depends on
various parameters and how it compares to other methods. There are
other interesting conclusions too: for a given amount of labeled data,
performance improves if the system is pre-trained on a larger number
of images, and after unsupervised pre-training, neurons appear that
(to some extent) explicitly represent certain categories. Next I will
discuss a simple experiment with temporal sequences. Then I will put
this problem into the context of cortical computations and outline a
plan that I believe is very reasonable to undertake in the future.
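As background on what training a restricted Boltzmann machine involves, here is a minimal contrastive-divergence (CD-1) sketch in Python/numpy for binary data; the layer sizes, learning rate and random binary "bag-of-words" inputs are made-up illustrations, not the settings or data used in the talk.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class RBM:
    """Binary restricted Boltzmann machine trained with CD-1."""
    def __init__(self, n_visible, n_hidden, lr=0.05):
        self.W = rng.normal(0, 0.01, (n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)    # visible biases
        self.b_h = np.zeros(n_hidden)     # hidden biases
        self.lr = lr

    def cd1_step(self, v0):
        # Positive phase: hidden probabilities given the data.
        p_h0 = sigmoid(v0 @ self.W + self.b_h)
        h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
        # Negative phase: one step of Gibbs sampling.
        p_v1 = sigmoid(h0 @ self.W.T + self.b_v)
        p_h1 = sigmoid(p_v1 @ self.W + self.b_h)
        # Approximate log-likelihood gradient (CD-1).
        self.W += self.lr * (np.outer(v0, p_h0) - np.outer(p_v1, p_h1))
        self.b_v += self.lr * (v0 - p_v1)
        self.b_h += self.lr * (p_h0 - p_h1)

    def hidden_features(self, v):
        return sigmoid(v @ self.W + self.b_h)    # features fed to a classifier

# Toy usage: unsupervised pre-training on random binary vectors; the hidden
# activations would then be used as features for supervised learning.
data = (rng.random((500, 100)) < 0.1).astype(float)
rbm = RBM(n_visible=100, n_hidden=50)
for _ in range(5):
    for v in data:
        rbm.cd1_step(v)
features = rbm.hidden_features(data)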
This week's seminar (see http://www.iro.umontreal.ca/article.php3?
id_article=107&lang=en):
Aggregate Markov Decision Processes
by Hasan Mirza,
McGill University
Location: McConnell Engineering (McGill), room 103
Time: April 8th 2008, 10am
We consider a special type of Markov decision problem in which an
agent maintains an infinitely divisible collection of identical
"machines," each described by a standard finite-state MDP. The agent
is subject to constraints on the fraction of the machines receiving each
available action. We model the collection of MDPs as a single MDP by
looking at the frequency at which each state is observed in each
underlying MDP. Although this MDP has continuous state and action
spaces, its transitions are deterministic, and its structure leads to
interesting properties such as a convex value function. We present a
linear-programming receding-horizon control technique for use under
the discounted-cost performance criterion, and investigate its
behaviour through experiments on example problems. The experimental
results show that the technique is viable on practically-sized
problems. Finally, we present a probabilistic result that relates the
frequency MDP to the model where each machine is treated separately
and the collection is not infinitely divisible.
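To make the frequency-MDP construction concrete, here is a minimal sketch of the deterministic aggregate dynamics: the aggregate state is the fraction of machines in each underlying state, the control specifies, for each state, the fraction of its machines receiving each action, and the next aggregate state follows from the underlying transition matrices. The two-state, two-action numbers are invented for illustration and are not from the talk.

import numpy as np

# Underlying machine: a small finite-state MDP.
# P[a][s, s'] = probability that one machine moves from s to s' under action a.
P = {
    0: np.array([[0.9, 0.1],    # action 0: "do nothing"
                 [0.3, 0.7]]),
    1: np.array([[0.5, 0.5],    # action 1: "repair"
                 [0.1, 0.9]]),
}

def aggregate_step(x, u):
    """Deterministic aggregate transition.

    x[s]    : fraction of machines currently in state s (sums to 1)
    u[a][s] : fraction of the machines in state s receiving action a
              (for each s, the fractions sum to 1 over a)
    Returns the next state-frequency vector.
    """
    x_next = np.zeros_like(x)
    for a, Pa in P.items():
        # Machines in state s that receive action a flow according to Pa.
        x_next += (x * u[a]) @ Pa
    return x_next

# Toy usage: 80% of machines start in state 0, 20% in state 1, and only the
# machines in state 1 get the repair action.  A constraint of the kind the
# abstract mentions would bound, e.g., the total fraction repaired:
# sum_s x[s] * u[1][s] <= budget.
x = np.array([0.8, 0.2])
u = {0: np.array([1.0, 0.0]),
     1: np.array([0.0, 1.0])}
for _ in range(3):
    x = aggregate_step(x, u)
    print(x)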