Hi all,
Our tea-talk tomorrow has been moved to 11am, same room (AA6214). Hope to see you there!
Dima
On Mon, 4 Sep 2017 at 21:02 Dzmitry Bahdanau dimabgv@gmail.com wrote:
Hi all,
Our next speaker is Karan Grewal, who is currently an intern at MILA. The talk will take place in *AA6214 at 13:45 on September 8*. Hope to see many of you there!
*Title:* Variance Regularizing Adversarial Learning
*Abstract:* Generative Adversarial Networks (GANs) have driven a breakthrough in generating synthetic data in recent years; however, many problems can arise during training that leave the generator unable to learn the desired data distribution or manifold. Most notably, the discriminator often overpowers the generator, leading to a poor training signal and vanishing gradients. We study the effect of the discriminator's unnormalized output distribution on the generator's ability to learn. To combat this problem, we propose Variance Regularizing Adversarial Learning, which regularizes the discriminator's unnormalized output distribution to fit a mixture of Gaussians (MoG), yielding higher variance and hence a Lipschitz-like discriminator function. We propose two methods for doing so: (1) using the KL divergence as a penalty in the discriminator's loss, and (2) playing meta-adversarial games to force the discriminator to fit a MoG. We show that our models are robust under high training ratios and compare them to other GAN variants.
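For anyone who wants a concrete picture before the talk, below is a minimal sketch of the first idea mentioned in the abstract: a KL penalty that pushes the empirical distribution of discriminator scores toward target Gaussians. Everything here is an assumption for illustration (PyTorch, two target Gaussians at +/- mu_target, the function names, the penalty weighting), not the speaker's actual formulation.

import torch

def gaussian_kl(mu_q, var_q, mu_p, var_p):
    # Closed-form KL( N(mu_q, var_q) || N(mu_p, var_p) ) for univariate Gaussians.
    return 0.5 * (torch.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

def variance_penalty(d_real, d_fake, mu_target=1.0, var_target=1.0):
    # Hypothetical penalty: fit a Gaussian to the discriminator's unnormalized
    # scores on real and fake batches, and penalize its KL divergence to a
    # target Gaussian for each batch (means +/- mu_target, variance var_target).
    penalty = 0.0
    for scores, mu_p in ((d_real, mu_target), (d_fake, -mu_target)):
        mu_q = scores.mean()
        var_q = scores.var(unbiased=False) + 1e-8  # small constant avoids log(0)
        penalty = penalty + gaussian_kl(mu_q, var_q,
                                        torch.tensor(mu_p), torch.tensor(var_target))
    return penalty

In a training loop this term would simply be added to the usual discriminator loss with some weight; how the actual work combines it with the adversarial objective, and how the meta-adversarial variant (2) works, is presumably what the talk will cover.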
*Bio:* Karan Grewal is an intern at MILA and a senior undergraduate student at the University of Toronto. He is working on improving and stabilizing generative models with Devon Hjelm and Yoshua Bengio. Previously, he worked on applying machine learning to social contexts.
Dima