Dear all,
Phil Bachman from our neighbouring McGill University will tell us about training generative models, bringing in recent advances in variational autoencoders and approximate Bayesian computation, from the perspective of policy learning. See below for the details.
Hope to see many of you there! - Cho
===
Speaker: Phil Bachman (McGill University)
Date/Time: 18 March 2015 @ 13:30
Place: Z-200
Title: Learning policies for generating data

Abstract: We develop an approach to training generative models that draws together several current lines of research. Our approach is based on unrolling a variational auto-encoder into a Markov chain and shaping the chain's trajectories using a technique inspired by recent work in approximate Bayesian computation. We show that the resulting objective is globally minimized when the generative model reproduces the target distribution. To allow finer control over the behavior of our models, we add a regularization term related to techniques used for shaping policy search in reinforcement learning. We present empirical results on the MNIST and TFD datasets showing that our approach exceeds state-of-the-art performance both quantitatively and qualitatively.
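
For those curious before the talk, here is a minimal sketch of what "unrolling an auto-encoder into a Markov chain" can look like. It is an illustrative toy only, not code from the paper: the weights are untrained random matrices standing in for a learned Gaussian encoder q(z|x) and Bernoulli decoder p(x|z), and all names (encode, decode, unrolled_chain, etc.) are assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumptions for illustration; the talk's models are trained on MNIST/TFD).
x_dim, z_dim = 784, 20

# Untrained random weights standing in for a learned encoder q(z|x) and decoder p(x|z).
W_enc_mu  = rng.normal(scale=0.01, size=(z_dim, x_dim))
W_enc_log = rng.normal(scale=0.01, size=(z_dim, x_dim))
W_dec     = rng.normal(scale=0.01, size=(x_dim, z_dim))

def encode(x):
    """Gaussian encoder: return mean and log-variance of q(z|x)."""
    return W_enc_mu @ x, W_enc_log @ x

def decode(z):
    """Bernoulli decoder: return pixel probabilities of p(x|z)."""
    return 1.0 / (1.0 + np.exp(-(W_dec @ z)))

def unrolled_chain(x0, n_steps=5):
    """Unroll the auto-encoder into a Markov chain x_0 -> x_1 -> ... -> x_T.

    Each step samples z ~ q(z|x_t) via the reparameterization trick and then
    samples x_{t+1} ~ p(x|z). In the approach described in the abstract, it is
    the whole trajectory that gets shaped during training, rather than a single
    one-step reconstruction.
    """
    trajectory = [x0]
    x = x0
    for _ in range(n_steps):
        mu, log_var = encode(x)
        z = mu + np.exp(0.5 * log_var) * rng.normal(size=z_dim)   # reparameterized latent sample
        p = decode(z)
        x = (rng.random(x_dim) < p).astype(float)                 # sample the next chain state
        trajectory.append(x)
    return trajectory

# Start the chain from noise and inspect the trajectory.
states = unrolled_chain(rng.random(x_dim).round())
print(len(states), states[-1].shape)
```

With trained weights, later states of such a chain would look increasingly like samples from the data distribution; how the trajectories are shaped (via the ABC-inspired objective and the policy-search-style regularizer) is exactly what the talk will cover.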