[Lisa_teatalk] Tea Talk 1 Aug Friday @13.00 AA3195 by Yoshua Bengio

Kyung Hyun Cho cho.k.hyun at gmail.com
Fri Aug 1 22:59:26 EDT 2014


Dear all,

You can find the recorded copy of Yoshua's talk today at
/data/lisatmp3/chokyun/teatalks/yoshua_aug_2014/target_propagation.mp4. The
talk was recorded by Tapani using his laptop.

Best,
- Cho




On Thu, Jul 31, 2014 at 4:49 PM, Kyung Hyun Cho <cho.k.hyun at gmail.com>
wrote:

> Dear all,
>
> We will have a tea talk this Friday by Prof. Yoshua Bengio. See below for
> the details and the attached paper.
>
> Hope to see many of you there!
> - Cho
>
> ===
> - Speaker: Prof. Yoshua Bengio (University of Montreal)
> - Date and Time: 1 Aug 2014 @13.00
> - Place: AA3195
> - Title: How Auto-Encoders Could Provide Credit Assignment in Deep
> Networks via Target Propagation
> - Abstract:
> In this paper we propose to exploit reconstruction as a layer-local
> training signal for deep learning, be it generative or discriminant, single
> or multi-modal, supervised, semi-supervised or unsupervised, feedforward or
> recurrent. Reconstructions can be propagated in a form of target
> propagation playing a role similar to back-propagation but helping to
> reduce the reliance on back-propagation in order to perform credit
> assignment across many levels of possibly strong non-linearities (which is
> difficult for back-propagation). A regularized auto-encoder tends to produce
> a reconstruction that is a more likely version of its input, i.e., a small
> move in the direction of higher likelihood. By generalizing gradients,
> target propagation may also make it possible to train deep networks with discrete
> hidden units. If the auto-encoder takes as input both a representation of the
> input and of the target (or of any side information), then its reconstruction
> of the input representation provides a target towards a representation that is
> more likely, conditioned on all the side information. A deep auto-encoder
> decoding path generalizes gradient propagation in a learned way that can
> thus handle not just infinitesimal changes but larger, discrete changes,
> hopefully allowing credit assignment through a long chain of non-linear
> operations. For this to work, each layer must be a good denoising or
> regularized auto-encoder itself. In addition to each layer being a good
> auto-encoder, the encoder also learns to please the upper layers by
> transforming the data into a space that is easier for them to model,
> flattening manifolds and disentangling factors. The motivations and
> theoretical justifications for this approach are laid down in this paper,
> along with conjectures that will have to be verified either mathematically
> or experimentally.
>
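
A rough NumPy sketch of the idea in the abstract, for anyone who wants something
concrete before watching the recording: each layer's decoder, trained as a
denoising auto-encoder, carries a target downward in place of a back-propagated
gradient, and each layer then gets a purely local update. This is only an
illustration, not the paper's algorithm; the layer sizes, learning rate, toy
input, and the made-up top-level target are all arbitrary choices.

# Minimal sketch of layer-local target propagation with auto-encoder layers.
# NOT the paper's implementation: sizes, learning rate, the toy input and the
# made-up top-level target are arbitrary, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)


class AELayer:
    """Encoder f(x) = tanh(W x + b) with decoder g(h) = tanh(V h + c)."""

    def __init__(self, n_in, n_out, lr=0.05):
        self.W = rng.normal(0.0, 0.1, (n_out, n_in))
        self.b = np.zeros(n_out)
        self.V = rng.normal(0.0, 0.1, (n_in, n_out))
        self.c = np.zeros(n_in)
        self.lr = lr

    def f(self, x):  # encoder: bottom-up representation
        return np.tanh(self.W @ x + self.b)

    def g(self, h):  # decoder: reconstruction / target-propagation path
        return np.tanh(self.V @ h + self.c)

    def update_decoder(self, x, noise=0.1):
        # Train g as a denoising auto-encoder of this layer's input, so that
        # its reconstructions move representations toward more likely inputs.
        h = self.f(x + noise * rng.normal(size=x.shape))
        r = self.g(h)
        d = (r - x) * (1.0 - r ** 2)  # gradient of the local reconstruction loss w.r.t. decoder pre-activation
        self.V -= self.lr * np.outer(d, h)
        self.c -= self.lr * d

    def update_encoder(self, x, h_target):
        # Layer-local update: move f(x) toward the target handed down from
        # above; no gradient is back-propagated through the upper layers.
        h = self.f(x)
        d = (h - h_target) * (1.0 - h ** 2)
        self.W -= self.lr * np.outer(d, x)
        self.b -= self.lr * d


# Two stacked layers on a toy input vector.
layer1, layer2 = AELayer(8, 6), AELayer(6, 4)
x = rng.uniform(-1.0, 1.0, size=8)
top_target = rng.uniform(-0.5, 0.5, size=4)  # stand-in for whatever the task wants at the top

for step in range(200):
    # Forward (encoding) pass.
    h1 = layer1.f(x)
    h2 = layer2.f(h1)

    # Top-level target: a small move of h2 toward the desired representation.
    h2_target = h2 + 0.3 * (top_target - h2)

    # Propagate the target downward through the decoder instead of a gradient.
    h1_target = layer2.g(h2_target)

    # Purely local parameter updates at each layer.
    layer2.update_encoder(h1, h2_target)
    layer1.update_encoder(x, h1_target)

    # Keep each decoder a good denoising auto-encoder of its own input.
    layer2.update_decoder(h1)
    layer1.update_decoder(x)

print("top-layer distance to target:", float(np.linalg.norm(layer2.f(layer1.f(x)) - top_target)))

The point of the sketch is that no gradient ever crosses a layer boundary:
information flows down only through each layer's decoder g, which is why each
layer must itself remain a good denoising or regularized auto-encoder.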

