[Lisa_seminaires] [Tea-Talk] Devon Hjelm & Laurent Dinh, March 17, 14:30, AA6214

Dzmitry Bahdanau dimabgv at gmail.com
Mon 13 Mar 15:01:28 EDT 2017


Hi all,

At the next tea-talk Devon Hjelm (post-doc) and Laurent Dinh (PhD student)
from MILA will present their ICML submissions. Please come in great numbers
on *March 17 at 13:30 in AA6214*! The details are below.

*Speaker:* Devon Hjelm
*Title:* Boundary-Seeking Generative Adversarial Networks
*Abstract:* We introduce a novel approach to training generative
adversarial networks (GANs, Goodfellow et al., 2014), which stems from
reinterpreting the generator objective to match a target distribution that
converges to the data distribution at the limit of a perfect discriminator.
This objective can be interpreted as training the generator to produce
samples that lie on the decision boundary of the current discriminator in
training, and we call this method boundary-seeking GANs (BS-GAN). This
approach can be used to train a generator with discrete output in the case
that the generator is parametrized by a conditional distribution, and we
demonstrate this with discrete image data. We also observe that the
Gumbel-softmax trick does not work for training GANs with discrete data.
Finally, our approach suggests a new objective function even for
continuously valued data, and we demonstrate this with common image
datasets.
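
As a concrete, hedged illustration of the continuous-data case: driving
generated samples toward the decision boundary D(x) = 0.5 can be written as a
squared log-odds penalty on the discriminator output. The PyTorch sketch below
illustrates that idea only and is not necessarily the exact objective of the
submission; the module and variable names are hypothetical, and the
discriminator is assumed to output probabilities in (0, 1).

import torch

def boundary_seeking_generator_loss(discriminator, fake_samples, eps=1e-7):
    # Push generated samples toward the discriminator's decision boundary,
    # i.e. toward D(x) = 0.5, by penalizing the squared log-odds.
    d = discriminator(fake_samples).clamp(eps, 1 - eps)  # D(G(z)) in (0, 1)
    log_odds = torch.log(d) - torch.log(1 - d)           # zero exactly at D = 0.5
    return 0.5 * (log_odds ** 2).mean()

# Hypothetical generator update inside a standard GAN training loop:
# z = torch.randn(batch_size, latent_dim)
# g_loss = boundary_seeking_generator_loss(discriminator, generator(z))
# g_loss.backward(); g_optimizer.step()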

*Speaker:* Laurent Dinh
*Title:* Sharp Minima Can Generalize For Deep Nets
*Abstract:* Despite their overwhelming capacity to overfit, deep learning
architectures tend to generalize relatively well to unseen data, allowing
them to be deployed in practice. However, explaining why this is the case
is still an open area of research. One standing hypothesis that is gaining
popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is
that the flatness of minima of the loss function found by stochastic
gradient-based methods results in good generalization. This paper argues
that most notions of flatness are problematic for deep models and cannot
be directly applied to explain generalization. Specifically, when focusing
on deep networks with rectifier units, we can exploit the particular
geometry of parameter space induced by the inherent symmetries that these
architectures exhibit to build equivalent models corresponding to
arbitrarily sharper minima. Alternatively, depending on the definition of
flatness, the flatness is the same for every minimum. Furthermore, if we
allow reparametrizations of a function, the geometry of its parameters can change
drastically without affecting its generalization properties.
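
To make the symmetry argument concrete, here is a minimal NumPy sketch (not
taken from the paper) of the positive rescaling symmetry of rectifier layers:
scaling one layer's weights by alpha > 0 and the next layer's by 1/alpha leaves
the network function unchanged, even though the parameter-space geometry that
flatness measures depend on changes drastically.

import numpy as np

rng = np.random.default_rng(0)
relu = lambda x: np.maximum(x, 0.0)

x  = rng.normal(size=(5, 10))   # a batch of inputs
W1 = rng.normal(size=(10, 20))  # first-layer weights
W2 = rng.normal(size=(20, 3))   # second-layer weights

def net(x, W1, W2):
    return relu(x @ W1) @ W2

alpha = 1000.0  # arbitrary positive rescaling factor
out_original = net(x, W1, W2)
out_rescaled = net(x, alpha * W1, W2 / alpha)

# The two parameter settings define the same function ...
print(np.allclose(out_original, out_rescaled))          # True
# ... yet the weights (and hence any flatness measure evaluated at them)
# live in very different regions of parameter space.
print(np.linalg.norm(W1), np.linalg.norm(alpha * W1))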

Best,
Dima


More information about the Lisa_seminaires mailing list