[Lisa_seminaires] [DIRO Talk] Devon Hjelm (MILA) Fri Mar 2 10:30AM AA1360

Michael Noukhovitch mnoukhov at gmail.com
Fri 2 Mar 10:21:49 EST 2018


Reminder: this is in 10 minutes!

On Mon, Feb 26, 2018 at 2:03 PM Michael Noukhovitch <mnoukhov at gmail.com>
wrote:

> For our last special DIRO talk, we have our very own *Devon Hjelm* giving
> a talk on *Friday March 2* at *10:30AM* in room *AA1360*.
>
> This talk has generated a lot of interest, so come take a GANder and judge
> for yourself if it's the real deal!
> Michael
>
> *TITLE*  Research in Generative Adversarial Learning
>
> *KEYWORDS*  Unsupervised learning, generative models, representation
> learning, adversarial learning, mutual information
>
> *ABSTRACT*
> Since their inception, generative adversarial networks (GANs) have emerged
> as a state-of-the-art approach for generating high-dimensional continuous
> data. As the field has grown, so have its applications, with the
> underlying principles of GANs being extended to numerous problems in
> unsupervised learning.
>
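> For reference, the basic adversarial game is the minimax objective
>
>     min_G max_D  E_{x~p_data}[log D(x)] + E_{z~p(z)}[log(1 - D(G(z)))]
>
> in which the discriminator D learns to distinguish data from generated
> samples and the generator G learns to fool it.
>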
> This talk will cover the basic principles of GANs, as well as briefly
> summarize three of our recent works that apply these principles to
> different types of unsupervised problems:
>
> 1) Discrete generation in GANs (Boundary-Seeking GANs, ICLR 2018):
> Training GANs with discrete data (e.g., natural language realized as a
> sequence of character or word tokens) is normally not possible, since
> the discrete sampling step blocks backpropagation. We introduce a
> principled approach for training GANs with discrete data that draws on
> likelihood ratio estimation, importance sampling, and policy gradients.
>
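> A minimal single-step sketch of the flavor of update this enables,
> assuming PyTorch, with toy linear modules; the weighting below is a
> simplified likelihood-ratio/REINFORCE surrogate, not the paper's exact
> estimator:
>
>     import torch
>     import torch.nn.functional as F
>
>     vocab, dim, batch = 100, 32, 64
>     generator = torch.nn.Linear(dim, vocab)    # token logits
>     discriminator = torch.nn.Linear(vocab, 1)  # scores one-hot tokens
>
>     logits = generator(torch.randn(batch, dim))
>     probs = F.softmax(logits, dim=-1)
>     # Discrete sampling: this step is what blocks ordinary backprop.
>     tokens = torch.multinomial(probs, 1).squeeze(-1)
>
>     onehot = F.one_hot(tokens, vocab).float()
>     d = torch.sigmoid(discriminator(onehot)).squeeze(-1)
>     w = (d / (1 - d)).detach()  # likelihood-ratio estimate of p_data/p_g
>     w = w / w.sum()             # self-normalized importance weights
>
>     # Policy-gradient surrogate: reweight generator log-likelihoods.
>     log_g = F.log_softmax(logits, dim=-1)[torch.arange(batch), tokens]
>     (-(w * log_g).sum()).backward()
>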
> 2) Learning richer representations in bidirectional adversarial models
> (GibbsNet, NIPS 2017): Undirected graphical models (e.g., RBMs, DBMs)
> can provide richer representations than those available from directed
> graphical models (e.g., VAEs). We draw on ideas from undirected
> graphical models to formulate an adversarial model that learns richer
> representations than competing adversarial models do.
>
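> A toy sketch of the alternating sampling chain at the core of the
> model, with hypothetical placeholder modules (the real encoder,
> decoder, and joint discriminator are deep, stochastic networks):
>
>     import torch
>
>     dim_x, dim_z, steps = 784, 64, 3
>     decoder = torch.nn.Linear(dim_z, dim_x)         # x given z
>     encoder = torch.nn.Linear(dim_x, dim_z)         # z given x
>     joint_disc = torch.nn.Linear(dim_x + dim_z, 1)  # scores (x, z)
>
>     # Unclamped chain: start from the prior, alternate decode/encode.
>     z = torch.randn(32, dim_z)
>     for _ in range(steps):
>         x = decoder(z)
>         z = encoder(x)
>
>     # Clamped chain: a single encoding of a (stand-in) real data batch.
>     x_real = torch.randn(32, dim_x)
>     z_real = encoder(x_real)
>
>     # Adversarial training pushes the two joint (x, z) distributions
>     # toward each other.
>     score_fake = joint_disc(torch.cat([x, z], dim=-1))
>     score_real = joint_disc(torch.cat([x_real, z_real], dim=-1))
>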
> 3) Neural mutual information estimation (MINE, in review): Mutual
> information is notoriously difficult to compute, especially in the
> high-dimensional continuous setting. We introduce a general-purpose
> neural estimator of mutual information that is scalable, flexible, and
> completely trainable via backpropagation.
>
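> A minimal sketch of a Donsker-Varadhan-style estimator in this spirit,
> assuming PyTorch, with a toy statistics network and toy data (the full
> estimator also corrects a gradient bias, omitted here):
>
>     import math
>     import torch
>
>     T = torch.nn.Sequential(  # statistics network T(x, z)
>         torch.nn.Linear(20, 64), torch.nn.ReLU(),
>         torch.nn.Linear(64, 1))
>     opt = torch.optim.Adam(T.parameters(), lr=1e-3)
>
>     for _ in range(100):
>         x = torch.randn(256, 10)
>         z = x + 0.1 * torch.randn(256, 10)  # correlated toy pair
>         z_shuf = z[torch.randperm(256)]     # ~ product of marginals
>         t_joint = T(torch.cat([x, z], -1)).mean()
>         t_marg = torch.logsumexp(
>             T(torch.cat([x, z_shuf], -1)), 0).squeeze() - math.log(256)
>         loss = -(t_joint - t_marg)  # maximize DV lower bound on I(X; Z)
>         opt.zero_grad(); loss.backward(); opt.step()
>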
>
> *BIO*
> R Devon Hjelm earned his PhD at the University of New Mexico under the
> supervision of Vince Calhoun at the Mind Research Network, a research
> institute dedicated to neuro-diagnostic discovery. Prior to this, he
> acquired a Master’s degree in Physics (with a focus on Quantum Information)
> and Linguistics. He joined the Montréal Institute for Learning Algorithms
> (MILA) at the University of Montréal in January 2017 as an IVADO
> “distinguished researcher” postdoctoral fellow under Yoshua Bengio. There,
> his research focus shifted to adversarial learning (GANs), notably applying
> ideas from GANs to a broader class of unsupervised learning problems.
> Ultimately, he is interested in training an agent that can reason about the
> natural world from evidence and communicate its understanding to humans.
>
> *PHOTO*
> *(real sample, not generated)*
>
> *[image: CD2F92BF-5584-473C-BE37-5D616C37B293.png]*
>