[Lisa_teatalk] Fwd: Talk by Phil Bachman on Wednesday (18 March) at 13.30

Kyung Hyun Cho cho.k.hyun at gmail.com
Wed Mar 18 20:45:09 EDT 2015


Dear all,

Phil, who gave a talk today, sent us the slides and a bunch of videos
showing samples from the trained models.

Best,
- K

---------- Forwarded message ----------
From: Phil Bachman <phil.bachman at gmail.com>
Date: Wed, Mar 18, 2015 at 8:24 PM
Subject: Re: Talk by Phil Bachman on Wednesday (18 March) at 13.30
To: Kyung Hyun Cho <cho.k.hyun at gmail.com>


I made some nicer video demos of my models trained on MNIST and TFD. The
archive includes a README.txt, which says the following:

=====
These are results from VAEs that were trained initially using the normal
one-step approach, and then fine-tuned using collaborative guidance to shape
the long-term behavior of the Markov chains constructed by feeding their
samples back into themselves.
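
Roughly, one step of such a chain samples z from q(z | x) and then feeds a
sample from p(x | z) back in as the next input. A schematic sketch (placeholder
networks and sizes, not the actual training code):

# Schematic: unrolling a trained VAE into a Markov chain by feeding the
# decoder's sample back into the encoder. encode/decode are placeholders
# standing in for the trained q(z | x) and p(x | z) networks.
import numpy as np

rng = np.random.RandomState(0)

def encode(x):
    # placeholder for q(z | x): returns posterior mean and log-variance
    return x[:20].astype(float), np.zeros(20)

def decode(z):
    # placeholder for p(x | z): returns Bernoulli means over 784 pixels
    return 1.0 / (1.0 + np.exp(-np.tile(z, 40)[:784]))

def chain_step(x):
    mu, log_var = encode(x)
    z = mu + np.exp(0.5 * log_var) * rng.randn(*mu.shape)  # sample q(z | x)
    return rng.binomial(1, decode(z))                      # sample p(x | z)

x = rng.binomial(1, 0.5, size=784)   # arbitrary starting point
for t in range(100):                 # unroll the chain for 100 steps
    x = chain_step(x)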

The MNIST model was trained with 24x the normal KLd penalty.

The TFD models were trained with 1x and 15x the normal KLd penalty.
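
By "Nx the normal KLd penalty" I mean scaling the KL term of the usual
per-example free energy by N. Schematically (a sketch, not the training code):

# Sketch of the KLd-weighted free energy for a diagonal Gaussian posterior
# q(z | x) = N(mu, diag(exp(log_var))) and a standard normal prior.
# kld_scale = 1.0 is the standard objective; 24.0 was used for MNIST.
import numpy as np

def free_energy(recon_nll, mu, log_var, kld_scale=1.0):
    # KL( N(mu, exp(log_var)) || N(0, I) ), summed over latent dimensions
    kld = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)
    return recon_nll + kld_scale * kld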

For both datasets I generated: a video of unrolled chain behavior, a set of
independent samples produced by passing the isotropic Gaussian prior through
the learned conditional p(x | z), and a plot of how the learned system
comprising q(z | x) and p(x | z) balances KLd against log-likelihood.
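
The independent samples are just draws of z from the isotropic Gaussian prior
pushed through the decoder; roughly (again with a placeholder decoder and an
assumed latent size):

# Sketch: independent samples come from drawing z ~ N(0, I) and passing it
# through p(x | z). decode is a placeholder for the trained decoder.
import numpy as np

rng = np.random.RandomState(1)
latent_dim = 20   # assumed latent dimensionality

def decode(z):
    return 1.0 / (1.0 + np.exp(-np.tile(z, 40)[:784]))   # placeholder

samples = [decode(rng.randn(latent_dim)) for _ in range(9)]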

The chain videos show multiple independent runs of chains with restarts
every 100 steps. All nine chains were initialized with the same random
example from the training set at each restart.
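
In pseudocode, the protocol behind the videos looks roughly like this
(chain_step is one q(z | x) -> p(x | z) pass as sketched above; the number of
restarts is just a parameter here):

# Sketch of the video protocol: nine chains run in parallel, all restarted
# from the same randomly chosen training example every 100 steps.
import numpy as np

rng = np.random.RandomState(2)

def run_chains(train_set, chain_step, n_chains=9, n_restarts=5, steps=100):
    frames = []
    for _ in range(n_restarts):
        x0 = train_set[rng.randint(len(train_set))]    # shared restart point
        chains = [x0.copy() for _ in range(n_chains)]
        for _ in range(steps):
            chains = [chain_step(x) for x in chains]
            frames.append(np.stack(chains))            # one frame per step
    return frames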

The MNIST behavior seems pretty good. TFD with 15x KLd regularization seems
to stay near the real distribution well enough, but lacks any real detail.
TFD with 1x KLd regularization (i.e., the standard free energy) gets weird.
=====

I thought some people might find this interesting. I've attached the demos
and a copy of the slides from my talk.

Thanks,
Phil Bachman

On Sat, Mar 14, 2015 at 4:42 PM, Kyung Hyun Cho <cho.k.hyun at gmail.com>
wrote:

> Dear all,
>
> Phil Bachman from our neighbouring McGill University will tell us about
> training generative models, bringing together recent advances in variational
> autoencoders and approximate Bayesian computation from the perspective of
> policy learning. See below for details.
>
> Hope to see many of you there!
> - Cho
>
> ===
> Speaker: Phil Bachman (McGill University)
> Date/Time: 18 March 2015 @13.30
> Place: Z-200
> Title: Learning policies for generating data
> Abstract:
> We develop an approach to training generative models that draws together
> several current lines of research. Our approach is based on unrolling a
> variational auto-encoder into a Markov chain and shaping the chain’s
> trajectories using a technique inspired by recent work in approximate
> Bayesian computation. We show that the resulting objective is globally
> minimized when the generative model reproduces the target distribution. To
> allow finer control over the behavior of our models, we add a regularization
> term related to techniques used for shaping policy search in reinforcement
> learning. We present empirical results on the MNIST and TFD datasets showing
> that our approach exceeds state-of-the-art performance both quantitatively
> and qualitatively.
>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: philb_gsns_and_vaes.pdf
Type: application/pdf
Size: 2283012 bytes
Desc: not available
Url : http://webmail.iro.umontreal.ca/pipermail/lisa_teatalk/attachments/20150318/956d07d0/attachment-0001.pdf 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: philb_model_demos.tar.gz
Type: application/x-gzip
Size: 8692481 bytes
Desc: not available
Url : http://webmail.iro.umontreal.ca/pipermail/lisa_teatalk/attachments/20150318/956d07d0/attachment-0001.gz 

