Hi everyone,
this week's tea talk will be a bit special -- Yoshua Bengio will talk about bridging the gap between deep learning and biology, and we will have a Dutch TV crew around to film how "science is done".
I expect it will be very crowded, so you may want to come a few minutes early.
--
Title: Towards bridging the gap between deep learning and biology
Who: Yoshua Bengio
When: 14:30
Where: AA 3195
We explore the following crucial question: how could brains potentially perform the kind of powerful credit assignment that allows the hidden layers of a very deep network to be trained, and that has recently been so successful with backprop in deep nets? Global reinforcement learning signals have too much variance (scaling with the number of neurons or synapses) to be credible from a machine learning point of view, and concerns have been raised about how something like back-propagation could be implemented in brains.

We present several intriguing results, all aimed at answering this question and possibly providing pieces of this puzzle. We start with an update rule that yields updates similar to STDP but is anchored in quantities such as pre-synaptic and post-synaptic firing rates and their temporal rates of change. We then show that if neurons are connected symmetrically (with feedback connections) and define an energy function, then (a) their behaviour corresponds both to inference, i.e., going down the energy, and to propagating error gradients; (b) after a prediction is made on a sensor and an actual value is observed, the early phases of inference in this network actually propagate prediction error gradients; (c) using the above STDP-inspired rule yields a gradient descent step on the prediction error at the fixed point of the recurrent network; and (d) contrary to what was previously believed for such fixed-point networks, it is not necessary to do a full relaxation in the positive phase (perturbation propagation does the backprop job).

Finally, we discuss some of the open problems we face moving forward, such as avoiding the negative-phase fixed-point relaxation (just as we got rid of the positive-phase one), avoiding the forced symmetry of synaptic weights, learning the full joint distribution rather than just a point prediction, doing unsupervised learning, and handling time.
--
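To make the setup in the abstract a bit more concrete, here is a minimal sketch (in Python/NumPy) of the kind of model it describes: a recurrent network with symmetric weights and a Hopfield-style energy, where inference is relaxation down the energy and the weight change is a rate-based, STDP-like rule driven by pre-synaptic rates and the temporal rate of change of post-synaptic rates. The particular energy form, the hard-sigmoid nonlinearity, and all function names are illustrative assumptions, not the speaker's actual model or code.

```python
# Minimal sketch of the ideas in the abstract -- assumptions for illustration,
# not the speaker's code. Assumes a Hopfield-style energy with symmetric
# weights W = W.T and a hard-sigmoid rate nonlinearity rho.
import numpy as np

rng = np.random.default_rng(0)
n = 50                                    # number of units
W = rng.normal(scale=0.1, size=(n, n))
W = (W + W.T) / 2                         # symmetric (feedback) connections
np.fill_diagonal(W, 0.0)
b = np.zeros(n)

def rho(s):
    """Firing-rate nonlinearity (hard sigmoid)."""
    return np.clip(s, 0.0, 1.0)

def energy(s):
    """Hopfield-style energy: leak term minus weighted co-activation."""
    r = rho(s)
    return 0.5 * s @ s - 0.5 * r @ W @ r - b @ r

def energy_grad(s):
    """dE/ds, using rho'(s) = 1 on (0, 1) and 0 outside."""
    r = rho(s)
    drho = ((s > 0) & (s < 1)).astype(float)
    return s - drho * (W @ r + b)

def infer(s, steps=20, eps=0.1):
    """Inference = relaxation downhill on the energy (leaky integration)."""
    for _ in range(steps):
        s = s - eps * energy_grad(s)
    return s

def stdp_like_update(s_before, s_after, lr=0.01):
    """Rate-based, STDP-like rule: pre-synaptic rate times the temporal
    rate of change of the post-synaptic rate (symmetrized)."""
    r0, r1 = rho(s_before), rho(s_after)
    dr = r1 - r0                          # temporal rate of change
    return lr * (np.outer(dr, r0) + np.outer(r0, dr)) / 2

# One step: relax toward a fixed point, nudge it, then update the weights.
s = rng.normal(scale=0.1, size=n)
s_free = infer(s)                                               # free-phase relaxation
print("energy before/after relaxation:", energy(s), energy(s_free))
s_nudged = infer(s_free + 0.05 * rng.normal(size=n), steps=5)   # small perturbation
W += stdp_like_update(s_free, s_nudged)                         # STDP-like weight change
W = (W + W.T) / 2                                               # keep the forced symmetry
```

The short "nudged" second relaxation here stands in for the perturbation propagation mentioned in point (d): in the abstract's framing, the early part of that relaxation is what carries the prediction error gradients, so a full positive-phase relaxation is not needed.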
Best, and looking forward to seeing you there
j
We will also stream and record this talk. If you can't make it in person, you can join via
https://plus.google.com/u/0/events/cta17i5fkbvblgg06ar7vbdtkf8 or http://www.youtube.com/watch?v=lKVIXI8Djv4
best,
Jörg
Hi,
just a friendly reminder:
This week Yoshua Bengio will talk about bridging the gap between deep learning and biology. We will also try something new and stream the talk at
https://plus.google.com/u/0/events/cta17i5fkbvblgg06ar7vbdtkf8 http://www.youtube.com/watch?v=lKVIXI8Djv4
--
Title: Towards bridging the gap between deep learning and biology
Who: Yoshua Bengio
When: Fri, 19th, 14:30
Where: Pavillon André-Aisenstadt, 3rd floor, AA 3195
Best, and looking forward to seeing you there
j
FYI, I shared this live stream with a few people here at NYU, and they may join via Hangout. I hope it's okay with you guys and specifically with Yoshua. - K
On Fri, Feb 19, 2016 at 12:35 PM, Jörg Bornschein bornj@iro.umontreal.ca wrote:
Thank you so much, Jörg, for the streaming; that was much appreciated!
On Fri, Feb 19, 2016 at 12:35 PM, Jörg Bornschein bornj@iro.umontreal.ca wrote: