[Lisa_seminaires] [mila-tous] [room change] [Tea Talk] Gauthier Gidel (Mila) Fri October 26 2018 11:00 PCM Z315

saikrishna gottipati saikrishnagv1996 at gmail.com
Fri 26 Oct 11:04:59 EDT 2018


Last change for the day. It's happening in 3195. Sorry for the confusion!

On Fri, 26 Oct 2018, 10:56 Rim Assouel, <rim.assouel at gmail.com> wrote:

> Sorry, last-minute room change: this is happening in AA 1409
>
> Begin forwarded message:
>
> *From: *Rim Assouel <rim.assouel at gmail.com>
> *Subject: **[reminder] [Tea Talk] Gauthier Gidel (Mila) Fri October 26 2018
> 11:00 PCM Z315*
> *Date: *26 October 2018 at 10:42:18 UTC−4
> *To: *mila-tous at mila.quebec, lisa_teatalk at iro.umontreal.ca,
> lisa_seminaires at iro.umontreal.ca, teatalk-orgs at lisa.iro.umontreal.ca
>
> Reminder that it is happening in 15 min :)
>
> Begin forwarded message:
>
> *From: *Rim Assouel <rim.assouel at gmail.com>
> *Subject: **[Tea Talk] Gauthier Gidel (Mila) Fri October 26 2018 11:00 PCM
> Z315*
> *Date: *24 October 2018 at 10:03:37 UTC−4
> *To: *mila-tous at mila.quebec, lisa_teatalk at iro.umontreal.ca,
> lisa_seminaires at iro.umontreal.ca, teatalk-orgs at lisa.iro.umontreal.ca
>
> This week we have our own *Gauthier Gidel* giving a talk on *Fri October
> 26 2018* at *11:00* in room *PCM Z315*.
>
> Will this talk be streamed <https://mila.bluejeans.com/809027115/webrtc>?
> Yes
> Recorded?
>
> Yes, you GAN learn a lot by going to this talk (too easy?)
>
> See you there!
> Rim and Sai
>
> *TITLE* A Variational Inequality Perspective on Generative Adversarial
> Networks
>
> *KEYWORDS*
> GANs, variational inequality, mini-max optimization
>
> *ABSTRACT*
> Generative adversarial networks (GANs) form a generative modeling approach
> known for producing appealing samples, but they are notably difficult to
> train. One common way to tackle this issue has been to propose new
> formulations of the GAN objective. Yet, surprisingly few studies have
> looked at optimization methods designed for this adversarial training. In
> this work, we cast GAN optimization problems in the general variational
> inequality framework. Tapping into the mathematical programming literature,
> we counter some common misconceptions about the difficulties of saddle
> point optimization and propose to extend methods designed for variational
> inequalities to the training of GANs. We apply averaging, extrapolation and
> a novel computationally cheaper variant that we call extrapolation from the
> past to the stochastic gradient method (SGD) and Adam.
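>
> [Not part of the original announcement: as a rough illustration of the
> "extrapolation from the past" step mentioned in the abstract, here is a
> minimal sketch on the toy bilinear game min_x max_y x*y. The step size,
> iteration count, and variable names are illustrative assumptions of mine,
> not the speaker's code.]
>
> import numpy as np
>
> def F(w):
>     # Gradient field of f(x, y) = x * y: descend in x, ascend in y.
>     x, y = w
>     return np.array([y, -x])
>
> eta = 0.2
> w = np.array([1.0, 1.0])   # start away from the equilibrium (0, 0)
> g_prev = F(w)              # gradient re-used by "extrapolation from the past"
>
> for t in range(500):
>     w_half = w - eta * g_prev    # extrapolation step using the stored gradient
>     g_prev = F(w_half)           # single new gradient evaluation per iteration
>     w = w - eta * g_prev         # update step from the extrapolated point
>
> print(w)   # approaches [0, 0]; plain simultaneous gradient steps would spiral outward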
>
> *BIO*
> Gauthier Gidel received the Diplôme de l’École Normale Supérieure in 2017
> (ULM MPI2013) and the Master of Science MVA from École Normale Supérieure
> Paris-Saclay in 2016. Gauthier is currently pursuing his PhD at Mila and
> DIRO at Université de Montréal under the supervision of Simon
> Lacoste-Julien. Gauthier’s PhD thesis topic revolves around saddle point
> optimization (a.k.a. mini-max problems) for machine learning and, more
> generally, variational inequalities, on which Gauthier has published
> several papers [Gidel et al. 2017, Gidel et al. 2018].
>
>
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mailman.iro.umontreal.ca/pipermail/lisa_seminaires/attachments/20181026/0ca3386e/attachment-0001.html>


More information about the Lisa_seminaires mailing list