This week's seminar (see http://www.iro.umontreal.ca/article.php3?id_article=107&lang=en):
Recent developments in learning deep networks
by Geoffrey Hinton, University of Toronto and Canadian Institute for Advanced Research
Location: Pavillon André-Aisenstadt (UdeM), room 3195. Time: March 13th, 2009, 10h30
It is possible to learn deep belief nets that are good at object recognition by composing a number of simple modules, each of which contains only one layer of hidden units. The layers are learned one at a time by treating the hidden activities of one module as the data for training the next module.
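For readers unfamiliar with the procedure, the following is a minimal sketch of this greedy layer-wise scheme, assuming each module is a binary restricted Boltzmann machine trained with plain one-step contrastive divergence (CD-1). The layer sizes, learning rate, and NumPy implementation are illustrative choices only; they are not details from the talk, and the "new, faster method" mentioned in the abstract is not what this sketch uses.

    # Greedy layer-wise pretraining sketch: each module is a binary RBM,
    # and the hidden activities of one module become the data for the next.
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def train_rbm(data, n_hidden, n_epochs=5, lr=0.05):
        """Train one module (an RBM) on `data` with CD-1; return weights and hidden biases."""
        n_visible = data.shape[1]
        W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        b_v = np.zeros(n_visible)
        b_h = np.zeros(n_hidden)
        for _ in range(n_epochs):
            for v0 in data:
                # Positive phase: hidden activities driven by the data.
                p_h0 = sigmoid(v0 @ W + b_h)
                h0 = (rng.random(n_hidden) < p_h0).astype(float)
                # Negative phase: one step of alternating Gibbs sampling.
                p_v1 = sigmoid(h0 @ W.T + b_v)
                p_h1 = sigmoid(p_v1 @ W + b_h)
                # CD-1 update: difference of pairwise (visible, hidden) statistics.
                W += lr * (np.outer(v0, p_h0) - np.outer(p_v1, p_h1))
                b_v += lr * (v0 - p_v1)
                b_h += lr * (p_h0 - p_h1)
        return W, b_h

    def stack_modules(data, layer_sizes):
        """Learn modules one at a time; each module's hidden activities
        are treated as the training data for the next module."""
        layers = []
        x = data
        for n_hidden in layer_sizes:
            W, b_h = train_rbm(x, n_hidden)
            layers.append((W, b_h))
            x = sigmoid(x @ W + b_h)  # hidden activities -> next module's data
        return layers

    # Toy usage: 200 random binary vectors, three stacked modules.
    toy_data = (rng.random((200, 16)) < 0.3).astype(float)
    stack = stack_modules(toy_data, layer_sizes=[32, 32, 16])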
I will start by describing a new method for learning each module that is faster than previous methods and gives better performance on test data. Then I will describe a more powerful basic module for deep learning. The module allows third-order, multiplicative interactions in which hidden units gate the pairwise interactions between visible units. A technique for factoring the third-order interactions leads to a learning module that has a simple learning rule based on pairwise correlations. This module looks remarkably like modules that have been proposed by both biologists trying to explain the responses of neurons and engineers trying to create systems that can recognize objects.
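To make the factoring idea concrete, one standard way to decompose a third-order weight tensor is to write each three-way weight as a sum over factors, so that the energy involves only products of per-factor filter responses. This follows the general factored higher-order Boltzmann machine formulation and is an assumption about the construction, not a reproduction of the specific model presented in the talk:

    W_{ijk} = \sum_f C_{if}\, D_{jf}\, P_{kf},
    \qquad
    E(\mathbf{v}, \mathbf{h}) \;=\; -\sum_f \Big(\sum_i C_{if} v_i\Big)\Big(\sum_j D_{jf} v_j\Big)\Big(\sum_k P_{kf} h_k\Big)

Under this factorization the derivative of the energy with respect to any single weight, say C_{if}, is a product of the other two factor responses, so the learning rule only needs correlations between pairs of unit and factor activities rather than full three-way statistics, which is one reading of the "simple learning rule based on pairwise correlations" in the abstract.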
The talk will describe joint work with Tijmen Tieleman and Roland Memisevic.