[Lisa_teatalk] [Lisa_labo] Tea Talk 16 June Monday @13.00 AA3195 by Ian Goodfellow

Kyung Hyun Cho cho.k.hyun at gmail.com
Thu Jun 12 13:21:29 EDT 2014


Dustin, thanks for volunteering!

I've put the paper in the list of tea talk papers (
https://docs.google.com/spreadsheets/d/1CdCx2P4QXrU3byj5p3RuFw06qTLEIcchhniXHraBtnI/edit?usp=sharing).
If no one else volunteers within a couple of weeks (during the tea-talk
break), I will schedule Dustin for the next slot.

On Thu, Jun 12, 2014 at 1:16 PM, Dustin Webb <u0625930 at utah.edu> wrote:

> I would like to give a talk, but I don't have a topic per se. If no one
> can offer a better option, I could present a paper I encountered that
> attempts to do deep learning with kernel methods (
> cseweb.ucsd.edu/~yoc002/paper/nips09_arccos.pdf).
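>
> For anyone who wants a feel for it before deciding: the core object is the
> arc-cosine kernel, which mimics the computation of an infinite one-layer
> threshold network and can be composed with itself to mimic depth. A rough
> numpy sketch of the degree-1 kernel as I read the paper (my own
> illustration, not the authors' code):
>
>     import numpy as np
>
>     def arccos_k1(x, y):
>         # Degree-1 arc-cosine kernel:
>         # k1(x, y) = (1/pi) * |x| * |y| * (sin t + (pi - t) * cos t),
>         # where t is the angle between x and y.
>         nx, ny = np.linalg.norm(x), np.linalg.norm(y)
>         cos_t = np.clip(np.dot(x, y) / (nx * ny), -1.0, 1.0)
>         t = np.arccos(cos_t)
>         return (nx * ny / np.pi) * (np.sin(t) + (np.pi - t) * cos_t)
>
>     def compose(k_xy, k_xx, k_yy):
>         # "Deep" version: feed the layer-l kernel values back in as if
>         # they were inner products, yielding the layer-(l+1) kernel.
>         cos_t = np.clip(k_xy / np.sqrt(k_xx * k_yy), -1.0, 1.0)
>         t = np.arccos(cos_t)
>         return (np.sqrt(k_xx * k_yy) / np.pi) * (np.sin(t) + (np.pi - t) * cos_t)
>
>     # Two-layer kernel between x and y:
>     # k2 = compose(arccos_k1(x, y), arccos_k1(x, x), arccos_k1(y, y))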
>
>
> On Thu, Jun 12, 2014 at 11:31 AM, Kyung Hyun Cho <cho.k.hyun at gmail.com>
> wrote:
>
>> Dear all,
>>
>> Ian Goodfellow will guide us to the wonderful world of adversarial
>> sampling, away from the hell of MCMC and iterative inference, at the tea
>> talk on Monday (http://arxiv-web3.library.cornell.edu/abs/1406.2661; the
>> paper is attached).
>>
>> Hope to see many of you there!
>> - Cho
>>
>> P.S. I am looking for volunteers who want to share their ideas or discuss
>> interesting papers at a tea talk! See
>> https://docs.google.com/spreadsheets/d/1_bbHxcm4r-rs63chcKfHuBsM3roHuud4CJJjCEqQfII/edit#gid=0
>>
>> ========
>>
>> - Speaker: Ian Goodfellow
>> - Date and Time: 16 June 2014 @13.00
>> - Place: AA3195
>> - Abstract:
>>
>> We propose a new framework for estimating generative models via an
>> adversarial process, in which we simultaneously train two models: a
>> generative model G that captures the data distribution, and a
>> discriminative model D that estimates the probability that a sample came
>> from the training data rather than G. The training procedure for G is to
>> maximize the probability of D making a mistake. This framework corresponds
>> to a minimax two-player game. In the space of arbitrary functions G and D,
>> a unique solution exists, with G recovering the training data distribution
>> and D equal to 1/2 everywhere. In the case where G and D are defined by
>> multilayer perceptrons, the entire system can be trained with
>> backpropagation. There is no need for any Markov chains or unrolled
>> approximate inference networks during either training or generation of
>> samples. Experiments demonstrate the potential of the framework through
>> qualitative and quantitative evaluation of the generated samples.
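>>
>> In the notation of the paper, the game is
>>
>>     min_G max_D V(D, G) = E_{x ~ p_data}[log D(x)]
>>                         + E_{z ~ p_z}[log(1 - D(G(z)))]
>>
>> and training alternates gradient steps on D and G. Below is a minimal
>> sketch of those alternating updates on a toy 1-D problem (written in
>> modern PyTorch notation purely for illustration, not the authors' code;
>> the network sizes, learning rates, and target distribution are
>> placeholders):
>>
>>     import torch
>>     import torch.nn as nn
>>
>>     # Toy task: make G(z), z ~ N(0, 1), match "data" drawn from N(4, 1).
>>     G = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
>>     D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1),
>>                       nn.Sigmoid())
>>     opt_d = torch.optim.SGD(D.parameters(), lr=0.05)
>>     opt_g = torch.optim.SGD(G.parameters(), lr=0.05)
>>     bce = nn.BCELoss()
>>     ones, zeros = torch.ones(64, 1), torch.zeros(64, 1)
>>
>>     for step in range(2000):
>>         # D step: push D(x) toward 1 on data, D(G(z)) toward 0 on samples.
>>         x = 4 + torch.randn(64, 1)
>>         z = torch.randn(64, 1)
>>         d_loss = bce(D(x), ones) + bce(D(G(z).detach()), zeros)
>>         opt_d.zero_grad()
>>         d_loss.backward()
>>         opt_d.step()
>>
>>         # G step: maximize the probability of D making a mistake,
>>         # i.e. push D(G(z)) toward 1 (the non-saturating form).
>>         z = torch.randn(64, 1)
>>         g_loss = bce(D(G(z)), ones)
>>         opt_g.zero_grad()
>>         g_loss.backward()
>>         opt_g.step()
>>
>> Note that no Markov chain and no inference network appear anywhere:
>> drawing a sample is a single forward pass through G.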
>>
>>
>>
>> _______________________________________________
>> Lisa_labo mailing list
>> Lisa_labo at iro.umontreal.ca
>> https://webmail.iro.umontreal.ca/mailman/listinfo/lisa_labo
>>
>>
>
>
> --
> Dustin Webb <http://www.cs.utah.edu/~dustin>
> Algorithmic Robotics Lab <http://arl.cs.utah.edu/>
> Laboratoire d'Informatique des Systemes Adaptatifs
> <http://lisa.iro.umontreal.ca/>
> Defining Intelligence <http://daemonmaker.blogspot.com>
>