[Lisa_seminaires] [Lisa_labo] [Extra Tea-Talk] Martin Arjovsky, March 7, 14:30, AA6214

Martin Arjovsky martinarjovsky at gmail.com
Tue Mar 7 18:45:46 EST 2017


Here are the final slides in case anyone wants them :)

https://docs.google.com/presentation/d/1yqVPqjplEcxSRrqRAIaQtGJR5Z94e0-kN9LlNgprEpM/edit?usp=sharing

PS: If you have any questions send me an email or stop me in the halls!
(I'm here till Thursday next week.)

Best!
Martin

2017-03-07 14:09 GMT-05:00 Dzmitry Bahdanau <dimabgv at gmail.com>:

> This is a reminder about the additional tea-talk, which is going to happen
> in 20 minutes.
>
> Here is also a *short bio* from Martin:
>
> I'm Martin Arjovsky. I'm currently doing my PhD at New York University,
> advised by Léon Bottou. I did my undergraduate and master's degrees at
> the University of Buenos Aires, Argentina (my home country). In the
> middle I took a year off to do internships in different places (Google,
> Facebook, Microsoft, and the Université de Montréal). My master's thesis
> advisor was Yoshua Bengio, who also advised me during my stay at UdeM.
> In general, I'm interested in the intersection of learning and
> mathematics: how we can ground the learning processes involved in
> different problems, and leverage this knowledge to develop better
> algorithms.
>
> On Sun, 5 Mar 2017 at 21:58 Dzmitry Bahdanau <dimabgv at gmail.com> wrote:
>
>> Hi all,
>>
>> This Tuesday (March 7) we will have an extra tea-talk by *Martin
>> Arjovsky*. Please note that the time is different (14:30), but the place
>> is the same (AA6214). Hope to see many of you there!
>>
>> *Title: *On Different Distances Between Distributions and Generative
>> Adversarial Networks
>>
>> *Abstract: *Generative adversarial networks (GANs) are notoriously
>> difficult to train. We show that, at their core, these problems arise
>> naturally when trying to learn distributions whose support lies on
>> low-dimensional manifolds. We show how these problems are consequences
>> of trying to optimize the classical divergences (KL, JSD, etc.) between
>> the real and model distributions, and that they are symptoms of a more
>> general phenomenon, pointing towards the inefficacy of the usual
>> divergences in certain settings. We then bring into play the
>> Wasserstein distance, which we prove does not suffer from the same
>> behaviour, and provide a first step towards an algorithm that
>> approximately optimizes this distance (a sketch follows at the end of
>> this message).
>>
>> Dima
>>
>
> _______________________________________________
> Lisa_labo mailing list
> Lisa_labo at iro.umontreal.ca
> https://webmail.iro.umontreal.ca/mailman/listinfo/lisa_labo
>
>
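
A concrete instance of the contrast the abstract describes is the two
parallel lines example from the Wasserstein GAN paper (arXiv:1701.07875).
Let Z ~ U[0,1], let P_0 be the distribution of the point (0, Z) in R^2,
and let P_theta be the distribution of (theta, Z). For theta != 0 the two
supports are disjoint parallel lines, so

    KL(P_theta || P_0) = +infinity    and    JS(P_0, P_theta) = log 2,

both constant in theta and hence giving no usable gradient, whereas

    W(P_0, P_theta) = |theta|,

which is continuous in theta and differentiable almost everywhere.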
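
The "first step" towards an algorithm mentioned in the abstract is to
train a critic network whose objective estimates the Wasserstein distance.
Below is a minimal sketch of that idea with the weight clipping used in
the WGAN paper, assuming PyTorch; the critic/generator architectures and
the sample_real data source are illustrative placeholders, not the models
from the paper.

import torch
import torch.nn as nn

# Small placeholder networks; the paper uses larger models.
latent_dim, data_dim = 64, 2
critic = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(),
                       nn.Linear(128, 1))
generator = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                          nn.Linear(128, data_dim))

# RMSprop, as recommended in the paper.
opt_c = torch.optim.RMSprop(critic.parameters(), lr=5e-5)
opt_g = torch.optim.RMSprop(generator.parameters(), lr=5e-5)

def sample_real(n):
    # Hypothetical data source: points on a 1-d curve in R^2, i.e. a
    # distribution supported on a low-dimensional manifold.
    x = torch.rand(n, 1)
    return torch.cat([x, torch.sin(3 * x)], dim=1)

n_critic, clip_value, batch = 5, 0.01, 64
for step in range(1000):
    # Several critic steps per generator step, so the critic's objective
    # tracks the Wasserstein distance (up to a constant factor).
    for _ in range(n_critic):
        real = sample_real(batch)
        fake = generator(torch.randn(batch, latent_dim)).detach()
        loss_c = critic(fake).mean() - critic(real).mean()
        opt_c.zero_grad()
        loss_c.backward()
        opt_c.step()
        # Clip weights to keep the critic (roughly) Lipschitz.
        for p in critic.parameters():
            p.data.clamp_(-clip_value, clip_value)
    # Generator step: push generated samples towards high critic values.
    fake = generator(torch.randn(batch, latent_dim))
    loss_g = -critic(fake).mean()
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

Weight clipping is only a crude way to enforce the Lipschitz constraint
that justifies this reading of the critic's objective; the paper itself
flags it as a placeholder for a better mechanism.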

