Dear all,
We will have a tea talk this Friday by Prof. Yoshua Bengio. See below for the details and the attached paper.
Hope to see many of you there! - Cho
===
- Speaker: Prof. Yoshua Bengio (University of Montreal)
- Date and Time: 1 Aug 2014 @13.00
- Place: AA3195
- Title: How Auto-Encoders Could Provide Credit Assignment in Deep Networks via Target Propagation
- Abstract:
In this paper we propose to exploit reconstruction as a layer-local training signal for deep learning, be it generative or discriminative, single- or multi-modal, supervised, semi-supervised or unsupervised, feedforward or recurrent. Reconstructions can be propagated in a form of target propagation, playing a role similar to back-propagation but helping to reduce the reliance on back-propagation for performing credit assignment across many levels of possibly strong non-linearities (which is difficult for back-propagation). A regularized auto-encoder tends to produce a reconstruction that is a more likely version of its input, i.e., a small move in the direction of higher likelihood. By generalizing gradients, target propagation may also make it possible to train deep networks with discrete hidden units. If the auto-encoder takes as input both a representation of the input and of the target (or of any side information), then its reconstruction of the input representation provides a target pointing towards a representation that is more likely, conditioned on all the side information. A deep auto-encoder's decoding path generalizes gradient propagation in a learned way that can thus handle not just infinitesimal changes but larger, discrete changes, hopefully allowing credit assignment through a long chain of non-linear operations. For this to work, each layer must itself be a good denoising or regularized auto-encoder. In addition to each layer being a good auto-encoder, the encoder also learns to please the upper layers by transforming the data into a space that is easier for them to model, flattening manifolds and disentangling factors. The motivations and theoretical justifications for this approach are laid down in this paper, along with conjectures that will have to be verified either mathematically or experimentally.
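For those who want a concrete feel for the idea before the talk, here is a minimal, purely illustrative NumPy sketch of the kind of target propagation the abstract describes: each layer is a small encoder/decoder pair, the decoders carry a target downward instead of a gradient, and every update is layer-local. This is not the algorithm from the attached paper or the arXiv version; the network sizes, the learning rates, the choice of top-level target, and the "difference" correction used in the downward pass are all assumptions made only for this toy example.

import numpy as np

rng = np.random.default_rng(0)

class AELayer:
    """One layer: an encoder f (forward pass) and a decoder g (used to send targets down)."""
    def __init__(self, n_in, n_out, lr=0.1):
        self.W = rng.normal(scale=0.1, size=(n_in, n_out))  # encoder weights
        self.V = rng.normal(scale=0.1, size=(n_out, n_in))  # decoder weights
        self.lr = lr

    def f(self, h):  # encoder
        return np.tanh(h @ self.W)

    def g(self, h):  # decoder (approximate inverse of f)
        return np.tanh(h @ self.V)

    def update_decoder(self, h_below, h_above):
        # Train g to reconstruct the layer below (squared error), so it behaves like an inverse of f.
        rec = self.g(h_above)
        grad = ((rec - h_below) * (1 - rec ** 2)).T @ h_above / len(h_below)
        self.V -= self.lr * grad.T

    def update_encoder(self, h_below, target_above):
        # Train f so its output moves toward the layer-local target; no global back-propagation.
        out = self.f(h_below)
        grad = h_below.T @ ((out - target_above) * (1 - out ** 2)) / len(h_below)
        self.W -= self.lr * grad

# Toy problem (made up for the sketch): predict y = sin(sum(x)) with a 3-layer stack.
layers = [AELayer(8, 16), AELayer(16, 16), AELayer(16, 1)]
X = rng.normal(size=(64, 8))
Y = np.sin(X.sum(axis=1, keepdims=True))

for step in range(201):
    # Forward pass: h[0] is the input, h[l+1] = f_l(h[l]).
    h = [X]
    for layer in layers:
        h.append(layer.f(h[-1]))

    # Top-level target: nudge the output a small step toward the label.
    targets = [None] * len(h)
    targets[-1] = h[-1] - 0.1 * (h[-1] - Y)

    # Propagate targets downward through the decoders, with a "difference"
    # correction so g only needs to be an approximate inverse:
    #   t[l] = h[l] - g_l(h[l+1]) + g_l(t[l+1])
    for l in range(len(layers) - 1, 0, -1):
        targets[l] = h[l] - layers[l].g(h[l + 1]) + layers[l].g(targets[l + 1])

    # Layer-local updates only: each decoder reconstructs the layer below,
    # and each encoder chases the target propagated to it from above.
    for l, layer in enumerate(layers):
        layer.update_decoder(h[l], h[l + 1])
        layer.update_encoder(h[l], targets[l + 1])

    if step % 50 == 0:
        print(step, float(np.mean((h[-1] - Y) ** 2)))

The training error printed every 50 steps should shrink even though no gradient is ever propagated through more than one layer, which is the point the abstract makes about reconstruction serving as a layer-local credit-assignment signal.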
Please ignore the attached pdf; it is a very old version. The arXiv version is much better, with many mistakes fixed:
http://arxiv.org/abs/1407.7906
In the future, of course, the svn version will always be the latest one (articles/2014/targetprop).
-- Yoshua
I cannot make it, but I would be very interested in seeing this tea talk. If others are in the same situation, perhaps we could look into recording it like we did with Guillaume's. I do not know whether we have the equipment to do this in the lab, though.
Pierre Luc
That's a nice idea. I would also benefit a lot from it.
Kyoung-Gu
Unfortunately, I don't have anything to record the talk with. Has anyone else at the lab happened to bring a camcorder or camera that could record it?
Tapani will record the talk with his laptop. It's not going to be of great quality, but hopefully it will be good enough to hear and see the talk.
Some phones have a pretty good camera. The only problem is that the default memory card is barely enough for an hour of high-quality video, assuming you don't have other things on the phone taking up gigabytes of space. I probably have enough space for about half an hour on my phone.
-- Yoshua
+1
Dear all,
You can find the recording of Yoshua's talk from today at /data/lisatmp3/chokyun/teatalks/yoshua_aug_2014/target_propagation.mp4. The talk was recorded by Tapani using his laptop.
Best, - Cho