[Lisa_teatalk] Tea Talk 1 Aug Friday @13.00 AA3195 by Yoshua Bengio

Kyung Hyun Cho cho.k.hyun at gmail.com
Thu Jul 31 16:49:43 EDT 2014


Dear all,

We will have a tea talk this Friday by Prof. Yoshua Bengio. See below for
the details and the attached paper.

Hope to see many of you there!
- Cho

===
- Speaker: Prof. Yoshua Bengio (University of Montreal)
- Date and Time: 1 Aug 2014 @13.00
- Place: AA3195
- Title: How Auto-Encoders Could Provide Credit Assignment in Deep Networks
via Target Propagation
- Abstract:
In this paper we propose to exploit reconstruction as a layer-local
training signal for deep learning, be it generative or discriminative,
single- or multi-modal, supervised, semi-supervised or unsupervised,
feedforward or recurrent. Reconstructions can be propagated in a form of
target propagation, playing a role similar to back-propagation but
helping to reduce the reliance on back-propagation for credit assignment
across many levels of possibly strong non-linearities (which is
difficult for back-propagation). A regularized auto-encoder tends to
produce a reconstruction that is a more likely version of its input,
i.e., a small move in the direction of higher likelihood. By
generalizing gradients, target propagation may also make it possible to
train deep networks with discrete hidden units. If the auto-encoder
takes representations of both the input and the target (or of any side
information) as input, then its reconstruction of the input
representation provides a target: a representation that is more likely,
conditioned on all the side information. A deep auto-encoder decoding
path generalizes gradient propagation in a learned way that can thus
handle not just infinitesimal changes but larger, discrete changes,
hopefully allowing credit assignment through a long chain of non-linear
operations. For this to work, each layer must itself be a good denoising
or regularized auto-encoder. In addition to being a good auto-encoder,
each encoder also learns to please the upper layers by transforming the
data into a space that is easier for them to model, flattening manifolds
and disentangling factors. The motivations and theoretical
justifications for this approach are laid down in this paper, along with
conjectures that will have to be verified either mathematically or
experimentally.
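
For a concrete picture of the mechanism the abstract describes, here is
a minimal NumPy sketch of vanilla target propagation (my own
illustrative reading, not code from the paper; names such as TPLayer,
train_decoder and train_step are hypothetical). Each layer pairs an
encoder f with a decoder g trained as a layer-local auto-encoder; a
target for the top layer nudges the output toward the label, and targets
are then propagated downward through the decoders, h_hat_{i-1} =
g_i(h_hat_i), in place of gradients. The "small move in the direction of
higher likelihood" claim connects to the known result that, for small
corruption noise, a denoising auto-encoder's reconstruction satisfies
r(x) - x proportional to the score d log p(x) / dx (Alain & Bengio,
2014).

import numpy as np

rng = np.random.default_rng(0)

class TPLayer:
    # One layer for target propagation: an encoder f (used on the way
    # up) and a decoder g trained, auto-encoder style, to approximately
    # invert f. Tanh units and squared-error targets are illustrative
    # assumptions, not the paper's exact construction.
    def __init__(self, n_in, n_out, lr=0.05):
        self.W = rng.normal(0.0, 1.0 / np.sqrt(n_in), (n_in, n_out))
        self.V = rng.normal(0.0, 1.0 / np.sqrt(n_out), (n_out, n_in))
        self.lr = lr

    def f(self, h):          # encoder: h_i = f_i(h_{i-1})
        return np.tanh(h @ self.W)

    def g(self, h):          # decoder: learned approximate inverse of f
        return np.tanh(h @ self.V)

    def train_decoder(self, h_below):
        # Layer-local auto-encoder step: make g(f(h)) close to h.
        a = self.f(h_below)
        rec = self.g(a)
        delta = (rec - h_below) * (1.0 - rec ** 2)  # grad of 0.5*||rec - h||^2
        self.V -= self.lr * a.T @ delta

    def train_encoder(self, h_below, target):
        # Layer-local supervised step: move f(h_below) toward its target.
        h = self.f(h_below)
        delta = (h - target) * (1.0 - h ** 2)
        self.W -= self.lr * h_below.T @ delta

def train_step(layers, x, y, top_lr=0.5):
    # Forward pass, remembering each layer's input.
    hs = [x]
    for layer in layers:
        hs.append(layer.f(hs[-1]))
    # Top-level target: nudge the output toward the label (squared error).
    targets = [None] * len(layers)
    targets[-1] = hs[-1] - top_lr * (hs[-1] - y)
    # Propagate targets downward through the decoders instead of
    # gradients: the reconstruction of layer i's input becomes the
    # target for the layer below.
    for i in range(len(layers) - 1, 0, -1):
        targets[i - 1] = layers[i].g(targets[i])
    # Purely layer-local updates: no derivative crosses a layer boundary.
    for i, layer in enumerate(layers):
        layer.train_decoder(hs[i])
        layer.train_encoder(hs[i], targets[i])

# Example: a 3-layer net on random data, labels in (-1, 1) for tanh output.
layers = [TPLayer(20, 32), TPLayer(32, 32), TPLayer(32, 4)]
x = rng.normal(size=(16, 20))
y = np.tanh(rng.normal(size=(16, 4)))
for _ in range(100):
    train_step(layers, x, y)

Because g is only ever an approximate inverse of f, these propagated
targets are noisy; the paper's premise is that each layer being a good
denoising or regularized auto-encoder is what keeps such learned
inverses usable for credit assignment.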
-------------- next part --------------
Attached paper: targetprop.pdf (application/pdf, 304509 bytes)
URL: http://webmail.iro.umontreal.ca/pipermail/lisa_teatalk/attachments/20140731/f724180d/attachment-0001.pdf

