This week's seminar (see http://www.iro.umontreal.ca/article.php3?id_article=107&lang=en).
Also, you are all invited to the thesis defense of Nicolas Le Roux, right after this MITACS seminar, in the same room (see abstract below):
*************************
Learning Deep Hierarchies of Sparse and Invariant Features
by Yann LeCun, Courant Institute of Mathematical Sciences, New York University
Location: Pavillon André Aisenstadt (UdeM), room 3195
Time: April 30th, 2008, 14:00
A long-term goal of Machine Learning research is to solve highly complex "intelligent" tasks, such as visual perception, auditory perception, and language understanding. To reach that goal, the ML community must solve two problems: the Deep Learning Problem, and the Partition Function Problem.
There is considerable theoretical and empirical evidence that complex tasks, such as invariant object recognition in vision, require "deep" architectures, composed of multiple layers of trainable non-linear modules. The Deep Learning Problem is related to the difficulty of training such deep architectures.
Several methods have recently been proposed to train (or pre-train) deep architectures in an unsupervised fashion. Each layer of the deep architecture is composed of an encoder, which computes a feature vector from the input, and a decoder, which reconstructs the input from the features. A large number of such layers can be stacked and trained sequentially, thereby learning a deep hierarchy of features with increasing levels of abstraction. The training of each layer can be seen as shaping an energy landscape with low valleys around the training samples and high plateaus everywhere else. Forming these high plateaus constitutes the so-called Partition Function Problem.
A particular class of methods for deep energy-based unsupervised learning will be described that solves the Partition Function Problem by imposing sparsity constraints on the features. The method can learn multiple levels of sparse and overcomplete representations of data. When applied to natural image patches, the method produces hierarchies of filters similar to those found in the mammalian visual cortex.
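To make the recipe in the two paragraphs above concrete, here is a minimal sketch of greedy layer-wise training of sparse, overcomplete encoder/decoder pairs. It is only an illustration in modern PyTorch, not the system from the talk: the layer sizes, the L1 sparsity penalty, the sigmoid encoder, and the optimizer settings are all assumptions.

import torch
import torch.nn as nn

def train_layer(data, in_dim, hid_dim, sparsity_weight=1e-3, epochs=10):
    # Train one encoder/decoder pair to reconstruct its input; the L1
    # term pushes the feature vector toward sparsity (an assumed penalty,
    # standing in for the talk's sparsity constraints).
    encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.Sigmoid())
    decoder = nn.Linear(hid_dim, in_dim)
    params = list(encoder.parameters()) + list(decoder.parameters())
    opt = torch.optim.SGD(params, lr=0.01)
    for _ in range(epochs):
        features = encoder(data)              # encoder: input -> features
        recon = decoder(features)             # decoder: features -> input
        loss = ((recon - data) ** 2).mean() \
               + sparsity_weight * features.abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return encoder

def train_stack(data, dims):
    # Stack layers: each one is trained on the features of the previous
    # layer, yielding a deep hierarchy of increasingly abstract features.
    encoders = []
    for in_dim, hid_dim in zip(dims[:-1], dims[1:]):
        enc = train_layer(data, in_dim, hid_dim)
        encoders.append(enc)
        data = enc(data).detach()             # freeze and feed features upward
    return encoders

# e.g. random stand-ins for 16x16 image patches (256 inputs), mapped to
# two overcomplete layers of 512 sparse features each
patches = torch.rand(1000, 256)
stack = train_stack(patches, [256, 512, 512])

Each encoder/decoder pair is trained purely by reconstruction, so no partition function is ever computed; the sparsity term is what keeps the energy landscape from being flat everywhere, in the spirit of the method described above.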
An application to category-level object recognition with invariance to pose and illumination will be described (with a live demo). Another application to vision-based navigation for off-road mobile robots will be described (with videos). The system autonomously learns to discriminate obstacles from traversable areas at long range.
This is joint work with Y-Lan Boureau, Sumit Chopra, Raia Hadsell, Fu-Jie Huang, Koray Kavukcuoglu, and Marc'Aurelio Ranzato.
************************
Thesis defense of Nicolas Le Roux:
Title: Theoretical advances on the representation and optimization of neural networks
When and where: Wednesday, April 30th, 15:30, Pavillon Aisenstadt, room 3195
Advisor: Yoshua Bengio
Jury: Pierre L'Ecuyer, Pascal Vincent
Examiner: Yann LeCun
Abstract:
Neural networks are a class of learning algorithms widely used in artificial intelligence. The general enthusiasm they sparked in the 1980s unfortunately faded because of the difficulty of their optimization, a decline that the advent of kernel methods in the 1990s only accelerated.
After highlighting the limitations of kernel methods, and of shallow algorithms in general, I will present several extensions of neural networks that broaden their capabilities and ease their optimization. Finally, I will give a detailed analysis of deep algorithms, and present a fast gradient descent algorithm that makes them usable in applications where processing speed is essential.