Geoff Hinton is giving a talk next week (Friday, Oct. 17th) at McGill, organized by the CRM Applied Math Lab and McGill CSE: http://www.dms.umontreal.ca/~mathapp/
Location: McGill, Burnside Hall 1205
Time: 14:30
Speaker: Geoffrey Hinton (http://www.cs.toronto.edu/~hinton/)
Title: THE NEXT GENERATION OF NEURAL NETWORKS
Abstract: In the 1980s, new learning algorithms for neural networks promised to solve difficult classification tasks, like speech or object recognition, by learning many layers of non-linear features. The results were disappointing for two reasons: there was never enough labeled data to learn millions of complicated features, and the learning was much too slow in deep neural networks with many layers of features. These problems can now be overcome by learning one layer of features at a time and by changing the goal of learning. Instead of trying to predict the labels, the learning algorithm tries to create a generative model that produces data which looks just like the unlabeled training data. After learning many layers of features in this way, a relatively small amount of labeled data can then be used to fine-tune the features to give better discrimination. These new neural networks outperform other machine learning methods when labeled data is scarce but unlabeled data is plentiful. I will describe an application to recognizing 3-D shapes by Vinod Nair and an application to generating human motion by Graham Taylor.
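(Not part of the original announcement: the sketch below illustrates the idea described in the abstract, training one generative layer of features at a time on unlabeled data and then fine-tuning with a small labeled set. It uses a restricted Boltzmann machine trained with one-step contrastive divergence as the per-layer generative model, which is an assumption about the method rather than the speaker's actual code; all sizes, learning rates, and data are placeholders.)

```python
# Minimal sketch of greedy layer-wise pretraining followed by fine-tuning.
# Assumed details (not from the abstract): RBM layers trained with CD-1,
# random binary placeholder data, and a simple logistic classifier standing
# in for the discriminative fine-tuning step.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden, epochs=10, lr=0.05):
    """Learn one layer of binary features as a generative model (CD-1)."""
    n_visible = data.shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    b_v = np.zeros(n_visible)   # visible biases
    b_h = np.zeros(n_hidden)    # hidden biases
    for _ in range(epochs):
        # positive phase: hidden probabilities given the data
        h_prob = sigmoid(data @ W + b_h)
        h_sample = (rng.random(h_prob.shape) < h_prob).astype(float)
        # negative phase: one step of reconstruction from the hidden sample
        v_recon = sigmoid(h_sample @ W.T + b_v)
        h_recon = sigmoid(v_recon @ W + b_h)
        # contrastive-divergence parameter updates
        W += lr * (data.T @ h_prob - v_recon.T @ h_recon) / len(data)
        b_v += lr * (data - v_recon).mean(axis=0)
        b_h += lr * (h_prob - h_recon).mean(axis=0)
    return W, b_h

# Unlabeled data is plentiful, labeled data is scarce (both random here).
X_unlabeled = (rng.random((1000, 64)) > 0.5).astype(float)
X_labeled = X_unlabeled[:100]
y_labeled = rng.integers(0, 2, size=100)

# Greedy layer-wise pretraining: each layer models the activities of the one below.
layer_sizes = [32, 16]
weights, activations = [], X_unlabeled
for n_hidden in layer_sizes:
    W, b_h = train_rbm(activations, n_hidden)
    weights.append((W, b_h))
    activations = sigmoid(activations @ W + b_h)

# Fine-tuning stand-in: a logistic classifier on the learned top-level features,
# trained with the small labeled set (full backprop through the stack is omitted).
feats = X_labeled
for W, b_h in weights:
    feats = sigmoid(feats @ W + b_h)
w_out = np.zeros(feats.shape[1])
for _ in range(200):
    p = sigmoid(feats @ w_out)
    w_out += 0.1 * feats.T @ (y_labeled - p) / len(feats)
print("training accuracy:", ((sigmoid(feats @ w_out) > 0.5) == y_labeled).mean())
```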
lisa_seminaires@iro.umontreal.ca