A reminder for tomorrow's MITACS talk by Hugo Larochelle (Fri March 4th @ 14h00, AA3195). Hope to see you there!
---------- Forwarded message ----------
From: Guillaume Desjardins <guillaume.desjardins@gmail.com>
Date: Tue, Mar 1, 2011 at 2:23 PM
Subject: UdeM-McGill-MITACS machine learning seminar Fri March 4th @ 14h00, AA3195
To: lisa_seminaires@iro.umontreal.ca
A UdeM-McGill-MITACS machine learning seminar will be held this Friday, March 4th. The talk, given by Hugo Larochelle, will take place from 14h00 to 15h00 in room AA3195 (Université de Montréal). Hope to see you there!
Title: Two new autoencoders for distribution estimation and guided representation
Speaker: Hugo Larochelle (University of Toronto)
Abstract:
In this talk, I'll describe two new autoencoder-like models, developed for two different problems.
The first is the estimation of distributions of high-dimensional data, for which the restricted Boltzmann machine (RBM) has been shown to be a powerful model. However, an RBM typically does not provide a tractable distribution estimator, since evaluating the probability it assigns to a given observation requires computing the partition function, which is itself usually intractable. The model I'll describe circumvents this difficulty by decomposing the joint distribution of observations into tractable conditional distributions and modeling each conditional with a non-linear function similar to RBM conditionals. This model can also be interpreted as an autoencoder wired such that its output can be used to assign valid probabilities to observations. I'll present experiments showing that this new model outperforms other multivariate binary distribution estimators on several datasets and performs similarly to a large (but intractable) RBM.
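To make the decomposition concrete: the joint is factored by the chain rule as p(v) = p(v_1) p(v_2 | v_1) ... p(v_D | v_1, ..., v_{D-1}), and each conditional is computed by a small sigmoid network. Below is a minimal NumPy sketch of an estimator of this flavour; all names, shapes, and the particular weight-sharing scheme are illustrative assumptions, not necessarily the exact model from the talk.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def log_prob(v, W, V, b, c):
        # Log-probability of a binary vector v (shape (D,)) under
        # p(v) = prod_i p(v_i | v_<i), each conditional given by a
        # one-hidden-layer sigmoid network. Hypothetical shapes:
        # W is (H, D), V is (D, H), b is (D,), c is (H,).
        D = v.shape[0]
        a = c.copy()        # running hidden pre-activation for v_<i
        logp = 0.0
        for i in range(D):
            h = sigmoid(a)                  # hidden units given v_<i
            p_i = sigmoid(b[i] + V[i] @ h)  # p(v_i = 1 | v_<i)
            logp += np.log(p_i if v[i] else 1.0 - p_i)
            a += W[:, i] * v[i]             # absorb v_i for the next conditional
        return logp

Because the conditionals share one weight matrix and the hidden pre-activation is updated incrementally, each full evaluation of log p(v) is tractable, in contrast to the RBM's partition function.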
The second problem is that of guiding an autoencoder toward representations that are more useful for particular discriminative tasks. A complementary challenge is finding codes that are explicitly invariant to irrelevant transformations of the data. I'll describe how this can be achieved by combining an autoencoder with a Gaussian process latent variable model, enabling the autoencoder's unsupervised representation to both incorporate relevant label information and ignore irrelevant variations. I'll present experiments on several datasets that show how both labels and nuisance variables can provide cues for useful latent representations.
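As a rough illustration of this second idea, the sketch below combines an autoencoder's reconstruction error with the negative log marginal likelihood of a Gaussian process whose inputs are the latent codes, so that codes are rewarded for being predictive of the labels. The kernel choice, the weighting lam, and the encode/decode placeholders are all assumptions made for illustration, not the talk's exact formulation.

    import numpy as np

    def rbf_kernel(Z, lengthscale=1.0, noise=1e-3):
        # Squared-exponential kernel on latent codes Z (shape (N, d)),
        # plus a small diagonal noise term for numerical stability.
        sq = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * sq / lengthscale ** 2) + noise * np.eye(len(Z))

    def gp_guidance_nll(Z, y, lengthscale=1.0):
        # Negative log marginal likelihood of labels y under a GP whose
        # inputs are the codes Z: the "guidance" term on the representation.
        K = rbf_kernel(Z, lengthscale)
        L = np.linalg.cholesky(K)
        alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))   # K^{-1} y
        return (0.5 * y @ alpha
                + np.log(np.diag(L)).sum()                    # 0.5 * log det K
                + 0.5 * len(y) * np.log(2 * np.pi))

    def objective(X, y, encode, decode, lam=1.0):
        # Hypothetical combined loss: reconstruction plus GP guidance.
        # encode, decode, and the weight lam are placeholders.
        Z = encode(X)
        recon = ((decode(Z) - X) ** 2).mean()
        return recon + lam * gp_guidance_nll(Z, y)

Minimizing such an objective jointly over the autoencoder's parameters pulls the codes toward configurations that both reconstruct the data and make the labels easy to model; swapping labels for nuisance variables with a reversed sign of the guidance term is one way to encourage invariance instead.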
This is joint work with Iain Murray, Jasper Snoek and Ryan Prescott Adams.