Dear all,
I have some last-minute changes to the schedule of tea talks this week:
(1) Colin Devin's Tea Talk on Wednesday at 13.00 (AA3195)
(2) Yoshua Bengio's Tea Talk on Friday at 13.00 (AA3195)
The first one was already announced and is unchanged. The second one was scheduled this morning; Yoshua will tell us about his idea for a new learning criterion for deep neural networks.
I'm including Yoshua's abstract at the end of this email.
Best, - Cho
===
Speaker: Prof. Yoshua Bengio (University of Montreal)
Title: How Auto-Encoders Could Provide Credit Assignment in Deep Networks via Target Propagation
Abstract: We discuss the usefulness and rationality of optimism in general sequential decision making. Humans are often optimistically biased, and optimists achieve more of their ambitions than more rational people. We provide a mathematical analysis of general optimistic agents and identify how they can be better or worse than strictly rational agents. These agents select the most optimistic among the still-plausible hypotheses from a class. Further, we discuss a milder form of optimism in the case of continuously parameterized classes. We refer to this setting as reward-modulated inference, a framework that includes recent models of synaptic weight updates in neuroscience as a special case. At its center is a trade-off between assigning high probability to likable events and to likely events. A nice consequence is that we can formulate generative and discriminative learning as endpoints of a continuum. Finally, I present some first experiments using autoencoders on the classical MNIST task, showing a very simple way of getting decent but not state-of-the-art accuracy. The aim is not to optimize narrow tasks maximally but to achieve the law of effect: making choices with a frequency proportional to past rewards in situations deemed similar. In really complex tasks this can be a sufficient objective, and one that humans and animals often settle for. The strategy also provides suitable exploration as well as robustness to change.
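The last point, the law of effect, is concrete enough to sketch in code: choices are made with a frequency proportional to the reward they have accumulated so far. The short Python snippet below is only an illustration, not material from the talk; the three-armed bandit environment, its reward probabilities, and the unit prior counts are invented assumptions.

import numpy as np

rng = np.random.default_rng(1)

n_actions = 3
true_reward_prob = np.array([0.2, 0.5, 0.8])  # hypothetical environment, not from the abstract
accumulated = np.ones(n_actions)              # small prior so every action stays plausible

for t in range(10_000):
    p = accumulated / accumulated.sum()       # choice frequency proportional to past reward
    a = rng.choice(n_actions, p=p)
    r = float(rng.random() < true_reward_prob[a])  # Bernoulli reward for the chosen action
    accumulated[a] += r

print("asymptotic choice probabilities:", np.round(accumulated / accumulated.sum(), 2))

Note that this rule never stops choosing the weaker actions entirely, which is where the exploration and robustness to change mentioned above come from.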
I'm sorry, but the abstract attached to the last email was the wrong one. Please see below for the correct one.
Best, - Cho
===
Speaker: Prof. Yoshua Bengio (University of Montreal)
Title: How Auto-Encoders Could Provide Credit Assignment in Deep Networks via Target Propagation
Abstract: In this paper we propose to exploit reconstruction as a layer-local training signal for deep learning, be it generative or discriminant, single- or multi-modal, supervised, semi-supervised or unsupervised, feedforward or recurrent. Reconstructions can be propagated in a form of target propagation, playing a role similar to back-propagation but helping to reduce the reliance on back-propagation for credit assignment across many levels of possibly strong non-linearities (which is difficult for back-propagation). A regularized auto-encoder tends to produce a reconstruction that is a more likely version of its input, i.e., a small move in the direction of higher likelihood. By generalizing gradients, target propagation may also make it possible to train deep networks with discrete hidden units. If the auto-encoder takes representations of both the input and the target (or of any side information) as input, then its reconstruction of the input representation provides a target: a representation that is more likely, conditioned on all the side information. A deep auto-encoder decoding path generalizes gradient propagation in a learned way that can thus handle not just infinitesimal changes but larger, discrete changes, hopefully allowing credit assignment through a long chain of non-linear operations. For this to work, each layer must be a good denoising or regularized auto-encoder itself. In addition to each layer being a good auto-encoder, the encoder also learns to please the upper layers by transforming the data into a space that is easier for them to model, flattening manifolds and disentangling factors. The motivations and theoretical justifications for this approach are laid out in this paper, along with conjectures that will have to be verified either mathematically or experimentally.
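To make the layer-local recipe concrete, here is a minimal NumPy sketch of plain target propagation under simplifying assumptions that are not in the abstract: tanh layers, squared-error local losses, a small gradient step at the output to form the top-level target, and Gaussian corruption for each layer's denoising criterion. It illustrates the general idea only, not the method of the paper.

import numpy as np

rng = np.random.default_rng(0)

def tanh(x):
    return np.tanh(x)

def dtanh(y):
    # derivative of tanh with respect to its pre-activation, given y = tanh(x)
    return 1.0 - y ** 2

class Layer:
    """One layer = encoder f (forward map) + decoder g (learned approximate inverse)."""
    def __init__(self, n_in, n_out):
        self.Wf = rng.normal(0, 0.1, (n_in, n_out)); self.bf = np.zeros(n_out)
        self.Wg = rng.normal(0, 0.1, (n_out, n_in)); self.bg = np.zeros(n_in)

    def f(self, h):
        return tanh(h @ self.Wf + self.bf)

    def g(self, h):
        return tanh(h @ self.Wg + self.bg)

    def update_decoder(self, h_below, lr, noise=0.1):
        # Layer-local denoising criterion: g(f(corrupted h)) should recover the clean h.
        h_noisy = h_below + noise * rng.normal(size=h_below.shape)
        v = self.f(h_noisy)
        r = self.g(v)
        grad_pre = (r - h_below) * dtanh(r)
        self.Wg -= lr * v.T @ grad_pre / len(h_below)
        self.bg -= lr * grad_pre.mean(axis=0)

    def update_encoder(self, h_below, target_above, lr):
        # Layer-local target criterion: f(h_below) should move toward the propagated target.
        v = self.f(h_below)
        grad_pre = (v - target_above) * dtanh(v)
        self.Wf -= lr * h_below.T @ grad_pre / len(h_below)
        self.bf -= lr * grad_pre.mean(axis=0)

def train_step(layers, x, y_target, lr=0.05):
    # Forward pass, keeping every layer's activation.
    hs = [x]
    for layer in layers:
        hs.append(layer.f(hs[-1]))

    # Top-level target: a small step of the output toward lower squared error.
    targets = [None] * len(layers)
    targets[-1] = hs[-1] - 0.1 * (hs[-1] - y_target)

    # Propagate targets downward through the decoders; no gradient is chained across layers.
    for i in range(len(layers) - 1, 0, -1):
        targets[i - 1] = layers[i].g(targets[i])

    # Purely layer-local updates.
    for i, layer in enumerate(layers):
        layer.update_decoder(hs[i], lr)
        layer.update_encoder(hs[i], targets[i], lr)

# Usage on random data, just to show the call pattern.
layers = [Layer(8, 16), Layer(16, 16), Layer(16, 4)]
x = rng.normal(size=(32, 8))
y = np.eye(4)[rng.integers(0, 4, size=32)]
for _ in range(200):
    train_step(layers, x, y)
out = x
for layer in layers:
    out = layer.f(out)
print("mean squared output error:", float(np.mean((out - y) ** 2)))

The only derivatives appear inside each layer's own local loss; credit is carried across depth by the learned decoders alone, which is the point of target propagation.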