[Lisa_teatalk] [Lisa_labo] Tea Talk 1 Aug Friday @13.00 AA3195 by Yoshua Bengio

Kyung Hyun Cho cho.k.hyun at gmail.com
Fri Aug 1 11:24:57 EDT 2014


Tapani will record the talk with his laptop. It's not going to be of super
quality, but hopefully it will be good enough to hear and see the talk.


On Fri, Aug 1, 2014 at 10:06 AM, Kyung Hyun Cho <cho.k.hyun at gmail.com>
wrote:

> Unfortunately, I don't have anything to record the talk with. Is there
> anyone else at the lab who happens to have brought a camcorder or camera
> that could record the talk?
>
>
> On Fri, Aug 1, 2014 at 9:51 AM, KyoungGu Woo <epigramwoo at gmail.com> wrote:
>
>> That's a nice idea.
>> I would also benefit a lot from it.
>>
>> Kyoung-Gu
>> On Aug 1, 2014 at 9:37 AM, "Pierre Luc Carrier"
>> <carrier.pierreluc at gmail.com> wrote:
>>
>>> I cannot make it, but I would be very interested in seeing this
>>> tea-talk. If others are in the same situation, perhaps we could look into
>>> recording this tea-talk as we did with Guillaume's. I do not know whether
>>> we have the equipment to do this in the lab, though.
>>>
>>> Pierre Luc
>>>
>>>
>>> 2014-07-31 17:11 GMT-04:00 Yoshua Bengio <yoshua.bengio at gmail.com>:
>>>
>>>> Please ignore the attached PDF; it is a very old version. The arXiv
>>>> version is much better, with many mistakes fixed:
>>>>
>>>>    http://arxiv.org/abs/1407.7906
>>>>
>>>> In the future, of course, the SVN version will always be the latest one
>>>> (articles/2014/targetprop).
>>>>
>>>> -- Yoshua
>>>>
>>>>
>>>> On Thu, Jul 31, 2014 at 4:49 PM, Kyung Hyun Cho <cho.k.hyun at gmail.com>
>>>> wrote:
>>>>
>>>>> Dear all,
>>>>>
>>>>> We will have a tea talk this Friday by Prof. Yoshua Bengio. See below
>>>>> for the details and the attached paper.
>>>>>
>>>>> Hope to see many of you there!
>>>>> - Cho
>>>>>
>>>>> ===
>>>>> - Speaker: Prof. Yoshua Bengio (University of Montreal)
>>>>> - Date and Time: 1 Aug 2014 @13.00
>>>>> - Place: AA3195
>>>>> - Title: How Auto-Encoders Could Provide Credit Assignment in Deep
>>>>> Networks via Target Propagation
>>>>> - Abstract:
>>>>> In this paper we propose to exploit reconstruction as a layer-local
>>>>> training signal for deep learning, be it generative or discriminative,
>>>>> single or multi-modal, supervised, semi-supervised or unsupervised,
>>>>> feedforward or recurrent. Reconstructions can be propagated in a form of
>>>>> target propagation that plays a role similar to back-propagation, while
>>>>> reducing the reliance on back-propagation for credit assignment across
>>>>> many levels of possibly strong non-linearities (which is difficult for
>>>>> back-propagation). A regularized auto-encoder tends to produce a
>>>>> reconstruction that is a more likely version of its input, i.e., a small
>>>>> move in the direction of higher likelihood. By generalizing gradients,
>>>>> target propagation may also make it possible to train deep networks with
>>>>> discrete hidden units. If the auto-encoder takes both a representation of
>>>>> the input and of the target (or of any side information) as input, then
>>>>> its reconstruction of the input representation provides a target towards
>>>>> a representation that is more likely, conditioned on all the side
>>>>> information. A deep auto-encoder decoding path generalizes gradient
>>>>> propagation in a learned way that can thus handle not just infinitesimal
>>>>> changes but larger, discrete changes, hopefully allowing credit
>>>>> assignment through a long chain of non-linear operations. For this to
>>>>> work, each layer must itself be a good denoising or regularized
>>>>> auto-encoder. In addition, the encoder learns to please the upper layers
>>>>> by transforming the data into a space that is easier for them to model,
>>>>> flattening manifolds and disentangling factors. The motivations and
>>>>> theoretical justifications for this approach are laid out in this paper,
>>>>> along with conjectures that will have to be verified either
>>>>> mathematically or experimentally.
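>>>>>
>>>>> To make the mechanism concrete, here is a minimal sketch in plain NumPy
>>>>> of how a decoder's reconstruction could supply layer-local training
>>>>> targets. This is an illustrative toy, not the paper's algorithm: the
>>>>> names (AELayer, propagate_targets), the squared-error target loss, the
>>>>> noise placement, and the way the top target is formed are all
>>>>> assumptions made for this example.
>>>>>
>>>>>     import numpy as np
>>>>>
>>>>>     rng = np.random.default_rng(0)
>>>>>
>>>>>     def sigmoid(x):
>>>>>         return 1.0 / (1.0 + np.exp(-x))
>>>>>
>>>>>     class AELayer:
>>>>>         """One layer of the stack, trained as a denoising auto-encoder."""
>>>>>         def __init__(self, n_in, n_hid, lr=0.1):
>>>>>             self.W = rng.normal(0.0, 0.1, (n_hid, n_in))  # encoder weights
>>>>>             self.V = rng.normal(0.0, 0.1, (n_in, n_hid))  # decoder weights
>>>>>             self.lr = lr
>>>>>
>>>>>         def encode(self, x):
>>>>>             return sigmoid(self.W @ x)
>>>>>
>>>>>         def decode(self, h):
>>>>>             return sigmoid(self.V @ h)
>>>>>
>>>>>         def local_update(self, x, h_target):
>>>>>             # Encoder step: move this layer's code toward its propagated
>>>>>             # target using only layer-local quantities (no global backprop).
>>>>>             h = self.encode(x)
>>>>>             dh = (h - h_target) * h * (1.0 - h)  # delta of the local squared error through the sigmoid
>>>>>             self.W -= self.lr * np.outer(dh, x)
>>>>>             # Decoder step: train the decoder to reconstruct the layer input
>>>>>             # from a noisy code, so codes map back toward more likely inputs.
>>>>>             h_noisy = h + rng.normal(0.0, 0.1, h.shape)
>>>>>             r = self.decode(h_noisy)
>>>>>             dr = (r - x) * r * (1.0 - r)
>>>>>             self.V -= self.lr * np.outer(dr, h_noisy)
>>>>>
>>>>>     def propagate_targets(layers, top_target):
>>>>>         # Downward pass: each decoder maps the target of the layer above
>>>>>         # back into the code space of the layer below, standing in for
>>>>>         # gradient propagation and tolerating large or discrete changes.
>>>>>         targets = [None] * len(layers)
>>>>>         targets[-1] = top_target
>>>>>         for i in range(len(layers) - 1, 0, -1):
>>>>>             targets[i - 1] = layers[i].decode(targets[i])
>>>>>         return targets
>>>>>
>>>>>     # Toy usage: two stacked layers; the top target nudges the deepest
>>>>>     # code (here toward a random point, as a stand-in for a task signal).
>>>>>     layers = [AELayer(8, 6), AELayer(6, 4)]
>>>>>     x = rng.random(8)
>>>>>     h1 = layers[0].encode(x)
>>>>>     h2 = layers[1].encode(h1)
>>>>>     top_target = h2 - 0.1 * (h2 - rng.random(4))  # hypothetical task-driven target
>>>>>     targets = propagate_targets(layers, top_target)
>>>>>     layers[0].local_update(x, targets[0])
>>>>>     layers[1].local_update(h1, targets[1])
>>>>>
>>>>> Note that propagate_targets never differentiates through the encoders:
>>>>> the learned decoders carry the credit-assignment signal downward, which
>>>>> is what would let such a scheme handle large or discrete changes.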
>>>>>
>>>>> _______________________________________________
>>>>> Lisa_labo mailing list
>>>>> Lisa_labo at iro.umontreal.ca
>>>>> https://webmail.iro.umontreal.ca/mailman/listinfo/lisa_labo