Hi gang,
This week we have two more practice talks for the upcoming NIPS conference. Ian and Guillaume (take 2) will be presenting their recent work. Come with questions, comments and critiques.
When: Wednesday, Dec. 7th, 14h00
Where: AA3195
Speaker: Ian Goodfellow
Title: Spike-and-Slab Sparse Coding for Unsupervised Feature Discovery
We introduce spike-and-slab sparse coding (S3C), an unsupervised feature discovery algorithm. S3C is based on a generative model that resembles both the spike-and-slab RBM and sparse coding. Since exact inference in this model is intractable, we derive a structured variational inference procedure and employ a variational EM training algorithm. We demonstrate that this approach improves upon the supervised learning capabilities of both sparse coding and the ssRBM on the CIFAR-10 dataset. We evaluate our approach’s potential for semi-supervised learning on subsets of CIFAR-10. We achieve state-of-the-art self-taught learning performance on the STL-10 dataset and use our method to win the Transfer Learning Challenge of the NIPS 2011 Workshop on Challenges in Learning Hierarchical Models.
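(If you want a concrete picture of the generative model before the talk, here is a rough Python sketch, my own illustration rather than Ian's code: each latent unit has a binary "spike" h_i and a real-valued "slab" s_i, and the visible vector is a noisy linear combination of the active slabs. All parameter names below, W, b, mu, alpha, beta, are made-up placeholders.)

import numpy as np

rng = np.random.default_rng(0)

def sample_s3c(W, b, mu, alpha, beta, rng):
    """Draw one visible sample from a spike-and-slab generative model.

    W     : (n_visible, n_hidden) dictionary of features (assumed name)
    b     : (n_hidden,) spike biases (logits of the Bernoulli spikes)
    mu    : (n_hidden,) slab means
    alpha : (n_hidden,) slab precisions
    beta  : scalar precision of the visible noise
    """
    n_vis, n_hid = W.shape
    h = rng.random(n_hid) < 1.0 / (1.0 + np.exp(-b))      # spikes ~ Bernoulli(sigmoid(b))
    s = mu + rng.standard_normal(n_hid) / np.sqrt(alpha)  # slabs ~ N(mu, 1/alpha)
    v = W @ (h * s) + rng.standard_normal(n_vis) / np.sqrt(beta)  # v ~ N(W(h*s), I/beta)
    return v, h, s

# Toy usage: 4 visible units, 3 hidden units.
W = rng.standard_normal((4, 3))
v, h, s = sample_s3c(W, b=np.zeros(3), mu=np.ones(3),
                     alpha=np.ones(3), beta=10.0, rng=rng)
print(v, h.astype(int), s)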
+++++++++++++++++++++++++++++++++++++++++++++++++++++
Speaker: Guillaume Desjardins
Title: On Tracking The Partition Function
Markov Random Fields (MRFs) have proven very powerful both as density estimators and feature extractors for classification. However, their use is often limited by an inability to estimate the partition function $Z$. In this paper, we exploit the gradient descent training procedure of restricted Boltzmann machines (a type of MRF) to track the log partition function during learning. Our method relies on two distinct sources of information: (1) estimating the change $\Delta Z$ incurred by each gradient update, and (2) estimating the difference in $Z$ across a small set of tempered distributions using bridge sampling. The two sources of information are then combined using an inference procedure similar to Kalman filtering. Learning MRFs through Tempered Stochastic Maximum Likelihood, we can estimate $Z$ using no more temperatures than are required for learning. Comparing against both exact values and estimates obtained with annealed importance sampling (AIS), we show on several datasets that our method accurately tracks the log partition function. In contrast to AIS, our method provides this estimate at every time step, at a computational cost similar to that of training alone.
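(Again purely as an illustration, not Guillaume's implementation: below is a minimal Python sketch of the Kalman-filter-style fusion the abstract describes. A predicted change in log Z from each gradient update plays the role of the process model, and a noisy direct estimate, e.g. from bridge sampling, plays the role of the measurement. The noise variances q and r are made-up placeholders.)

import numpy as np

def track_log_z(delta_preds, measurements, q=0.01, r=0.1, logz0=0.0, p0=1.0):
    """One-dimensional Kalman filter over the scalar state log Z.

    delta_preds  : per-step predicted changes in log Z (process model)
    measurements : per-step noisy direct estimates of log Z (observations)
    q, r         : process / measurement noise variances (assumed values)
    """
    logz, p = logz0, p0
    track = []
    for dz, meas in zip(delta_preds, measurements):
        logz += dz                 # predict: apply the estimated change
        p += q
        k = p / (p + r)            # Kalman gain
        logz += k * (meas - logz)  # update: fold in the measurement
        p *= 1.0 - k
        track.append(logz)
    return np.array(track)

# Toy usage: the true log Z drifts upward; both information sources are noisy.
rng = np.random.default_rng(0)
true_logz = np.cumsum(np.full(100, 0.05))
est = track_log_z(np.diff(true_logz, prepend=0.0) + 0.05 * rng.standard_normal(100),
                  true_logz + 0.3 * rng.standard_normal(100))
print(abs(est[-1] - true_logz[-1]))  # tracking error at the final step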
Cheers,
Aaron