[Lisa_seminaires] [Lisa_labo] [Tea-Talk] Devon Hjelm & Laurent Dinh, March 17, 13:30, AA6214

Junyoung Chung elecegg at gmail.com
Fri 17 Mar 13:21:11 EDT 2017


The talks are happening soon!

--Junyoung

On Fri, Mar 17, 2017 at 9:04 AM, Dzmitry Bahdanau <dimabgv at gmail.com> wrote:

> Reminder: this tea-talk is today!
>
>
> On Mon, 13 Mar 2017 at 16:51 Dzmitry Bahdanau <dimabgv at gmail.com> wrote:
>
>> Sorry for the wrong information in the subject of the previous email. The
>> talk will take place at *13:30*.
>>
>> Dima
>>
>> On Mon, 13 Mar 2017 at 15:01 Dzmitry Bahdanau <dimabgv at gmail.com> wrote:
>>
>> Hi all,
>>
>> At the next tea-talk, Devon Hjelm (post-doc) and Laurent Dinh (PhD
>> student) from MILA will present their ICML submissions. Please come in
>> great numbers to *AA6214 on March 17 at 13:30*! The details are below.
>>
>> *Speaker:* Devon Hjelm
>> *Title:* Boundary-Seeking Generative Adversarial Networks
>> *Abstract:* We introduce a novel approach to training generative
>> adversarial networks (GANs, Goodfellow et al., 2014), which stems from
>> reinterpreting the generator objective to match a target distribution that
>> converges to the data distribution in the limit of a perfect discriminator.
>> This objective can be interpreted as training the generator to produce
>> samples that lie on the decision boundary of the current discriminator
>> during training, and we call this method boundary-seeking GANs (BS-GAN).
>> This approach can be used to train a generator with discrete output when
>> the generator is parametrized by a conditional distribution, and we
>> demonstrate this with discrete image data. We also observe that the
>> Gumbel-softmax trick does not work for training GANs with discrete data.
>> Finally, our approach suggests a new objective function even for
>> continuously valued data, and we demonstrate this with common image
>> datasets.
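>>
>> To make the boundary-seeking idea concrete, here is a minimal numpy
>> sketch (not taken from the paper) of one way such a generator loss could
>> be written for the continuous case, assuming the discriminator outputs a
>> probability D(x) in (0, 1):
>>
>> import numpy as np
>>
>> def boundary_seeking_generator_loss(d_fake):
>>     # Hypothetical helper, not the paper's exact objective.
>>     # d_fake: discriminator probabilities on generated samples.
>>     # log D(x) - log(1 - D(x)) is the discriminator logit; it is zero
>>     # exactly on the decision boundary D(x) = 0.5, so its squared value
>>     # penalizes samples the discriminator classifies confidently.
>>     eps = 1e-7
>>     d_fake = np.clip(d_fake, eps, 1.0 - eps)
>>     logit = np.log(d_fake) - np.log(1.0 - d_fake)
>>     return 0.5 * np.mean(logit ** 2)
>>
>> # Confidently-fake or confidently-real samples get a large loss;
>> # samples near the boundary get a loss close to zero.
>> print(boundary_seeking_generator_loss(np.array([0.1, 0.5, 0.9])))
>>
>> For the discrete case mentioned in the abstract, where the generator is a
>> conditional distribution, a score-function style gradient estimator rather
>> than backpropagation through samples would presumably be needed; the
>> sketch above only covers the continuous objective.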
>>
>> *Speaker:* Laurent Dinh
>> *Title:* Sharp Minima Can Generalize For Deep Nets
>> *Abstract:* Despite their overwhelming capacity to overfit, deep
>> learning architectures tend to generalize relatively well to unseen data,
>> allowing them to be deployed in practice. However, explaining why this is
>> the case is still an open area of research. One standing hypothesis that is
>> gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al.
>> (2017), is that the flatness of minima of the loss function found by
>> stochastic gradient-based methods results in good generalization. This
>> paper argues that most notions of flatness are problematic for deep models
>> and cannot be directly applied to explain generalization. Specifically,
>> when focusing on deep networks with rectifier units, we can exploit the
>> particular geometry of parameter space induced by the inherent symmetries
>> that these architectures exhibit to build equivalent models corresponding
>> to arbitrarily sharper minima, or, depending on the definition of flatness,
>> to show that it is the same for any given minimum. Furthermore, if we allow
>> reparametrization of a function, the geometry of its parameters can change
>> drastically without affecting its generalization properties.
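>>
>> As a concrete illustration of the rectifier symmetry mentioned above
>> (a minimal numpy sketch, not taken from the paper): for a two-layer ReLU
>> network, scaling the first layer by alpha > 0 and the second by 1/alpha
>> leaves the function, and hence the training loss, unchanged, while the
>> parameters themselves, and with them parameter-space notions of sharpness,
>> can be made arbitrarily large or small.
>>
>> import numpy as np
>>
>> def relu(z):
>>     return np.maximum(z, 0.0)
>>
>> def two_layer_net(x, W1, W2):
>>     # f(x) = W2 relu(W1 x)
>>     return W2 @ relu(W1 @ x)
>>
>> rng = np.random.default_rng(0)
>> x = rng.normal(size=4)
>> W1 = rng.normal(size=(8, 4))
>> W2 = rng.normal(size=(2, 8))
>>
>> alpha = 1e3  # any positive scale works
>> # relu(alpha * z) = alpha * relu(z) for alpha > 0, so the rescaled
>> # network computes exactly the same function...
>> print(np.allclose(two_layer_net(x, W1, W2),
>>                   two_layer_net(x, alpha * W1, W2 / alpha)))  # True
>> # ...yet the weights now sit in a very different region of parameter space.
>> print(np.linalg.norm(W1), np.linalg.norm(alpha * W1))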
>>
>> Best,
>> Dima
>>
>>
> _______________________________________________
> Lisa_labo mailing list
> Lisa_labo at iro.umontreal.ca
> https://webmail.iro.umontreal.ca/mailman/listinfo/lisa_labo
>
>

