Hi All!
Dima is going on an internship so I'll be organizing tea talks for the next couple of months!
This week, our very own Anirudh will present his NIPS paper (congrats btw!).
It will be at AA6214 on Friday, September 15th, at 11AM. Hope to see everyone there!
Title: Z-Forcing: Training Stochastic Recurrent Networks
Abstract: Many
efforts have been devoted to training generative latent variable models
with autoregressive decoders, such as recurrent neural networks (RNNs).
Stochastic recurrent models have been successful in capturing the
variability observed in natural sequential data such as speech. We
propose a novel sequential latent variable model unifying successful
ideas from recently proposed architectures: each step in the sequence is
associated with a latent variable that is used to condition the
recurrent dynamics for future steps. Training is performed with
amortised variational inference where the approximate posterior is
augmented with an RNN that runs backward through the sequence. In
addition to maximizing the variational lower bound, we ease training of
the latent variables by adding an auxiliary cost which forces them to
reconstruct the state of the backward recurrent network. This provides
the latent variables with a task-independent objective that enhances the
performance of the overall model. While conceptually simple,
our model achieves state-of-the-art results on standard speech
benchmarks such as TIMIT and BLIZZARD and competitive performance on
sequential MNIST. Finally, we apply our model to language modeling on
the IMDB dataset. The auxiliary cost is crucial for learning
interpretable latent variables.
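
For anyone who wants a feel for the idea before Friday, here is a rough Python/PyTorch sketch of the mechanism the abstract describes: a per-step latent z_t that conditions the forward recurrence, an approximate posterior that also sees a backward RNN state b_t, and an auxiliary cost that forces z_t to reconstruct b_t. All layer choices, sizes, and names below are my own illustrative guesses, not the authors' architecture; come to the talk for the real details.

import torch
import torch.nn as nn

def gaussian_kl(q_mu, q_logvar, p_mu, p_logvar):
    # KL( N(q_mu, q_var) || N(p_mu, p_var) ), summed over the last dimension
    return 0.5 * (p_logvar - q_logvar
                  + (q_logvar.exp() + (q_mu - p_mu) ** 2) / p_logvar.exp()
                  - 1.0).sum(-1)

class ZForcingSketch(nn.Module):
    # Illustrative sketch only, not the paper's exact model: a per-step latent
    # z_t conditions the forward recurrence, the posterior also sees a backward
    # RNN state b_t, and an auxiliary head asks z_t to reconstruct b_t.
    def __init__(self, x_dim=64, h_dim=128, z_dim=32):
        super().__init__()
        self.fwd = nn.GRUCell(x_dim + z_dim, h_dim)   # forward dynamics, conditioned on z_t
        self.bwd = nn.GRU(x_dim, h_dim)               # backward RNN, used only by the posterior
        self.prior = nn.Linear(h_dim, 2 * z_dim)      # p(z_t | h_{t-1})
        self.post = nn.Linear(2 * h_dim, 2 * z_dim)   # q(z_t | h_{t-1}, b_t)
        self.aux = nn.Linear(z_dim, h_dim)            # auxiliary head: predict b_t from z_t
        self.dec = nn.Linear(h_dim + z_dim, x_dim)    # toy Gaussian-mean decoder

    def forward(self, x, aux_weight=1.0):
        # x: (time, batch, x_dim); returns a scalar loss = -ELBO + auxiliary cost
        T, B, _ = x.shape
        b, _ = self.bwd(torch.flip(x, dims=[0]))      # run an RNN backward through the sequence
        b = torch.flip(b, dims=[0])
        h = x.new_zeros(B, self.fwd.hidden_size)
        loss = x.new_zeros(B)
        for t in range(T):
            p_mu, p_logvar = self.prior(h).chunk(2, dim=-1)
            q_mu, q_logvar = self.post(torch.cat([h, b[t]], dim=-1)).chunk(2, dim=-1)
            z = q_mu + torch.randn_like(q_mu) * (0.5 * q_logvar).exp()        # reparameterized sample
            nll = ((self.dec(torch.cat([h, z], dim=-1)) - x[t]) ** 2).sum(-1) # reconstruction term
            kl = gaussian_kl(q_mu, q_logvar, p_mu, p_logvar)                  # variational bound term
            aux = ((self.aux(z) - b[t]) ** 2).sum(-1)                         # force z_t to reconstruct b_t
            loss = loss + nll + kl + aux_weight * aux
            h = self.fwd(torch.cat([x[t], z], dim=-1), h)                     # z_t conditions future steps
        return loss.mean()

# Quick check: loss = ZForcingSketch()(torch.randn(20, 8, 64)); loss.backward()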
Michael