[Lisa_seminaires] Talk Fri. 19th: Towards bridging the gap between deep learning and biology

Jörg Bornschein bornj at iro.umontreal.ca
Thu 18 Feb 16:49:13 EST 2016


We will also stream and record this talk. If you can't make it in person,
you can join via

 https://plus.google.com/u/0/events/cta17i5fkbvblgg06ar7vbdtkf8 or
 http://www.youtube.com/watch?v=lKVIXI8Djv4


best,

   Jorg



> this week's tea talk will be a bit special -- Yoshua Bengio will talk
> about bridging the gap between deep learning and biology, and we will have
> a Dutch TV crew around to film how "science is done".
>
> I expect it will be very crowded -- so maybe come a few minutes early.
>
> --
> Title: Towards bridging the gap between deep learning and biology
> Who: Yoshua Bengio
> When: 14:30
> Where: AA 3195
>
> We explore the following crucial question: how could brains potentially
> perform the kind of powerful credit assignment that allows hidden layers of
> a very deep network to be trained and that has been so successful with
> backprop in deep nets recently? Global reinforcement learning signals have
> too much variance (scaling with the number of neurons or synapses) to be
> credible from a machine learning point of view. Concerns have been raised
> about how something like back-propagation could be implemented in brains.
> We present several intriguing results all aimed at answering this question
> and possibly providing pieces of this puzzle. We start with an update rule
> that yields updates similar to STDP but that is anchored in quantities such
> as pre-synaptic and post-synaptic firing rates and temporal rates of
> change. We then show that if neurons are connected symmetrically (with
> feedback connections) and define an energy function, then (a) their
> behaviour corresponds both to inference, i.e., going down the energy, and
> to propagating error gradients, (b) after a prediction is made on a sensor
> and an actual value is observed, the early phases of inference in this
> network actually propagate prediction error gradients, (c) using the above
> STDP-inspired rule yields a gradient descent step on the prediction error
> at the fixed point of the recurrent network, and (d) contrary to what was
> previously believed for such fixed-point networks, it is not necessary to
> do a full relaxation in the positive phase (perturbation propagation does
> the backprop job). Finally, we discuss some of the open problems we face
> in moving forward, such as avoiding the negative-phase fixed-point
> relaxation (just as we got rid of the positive-phase one), avoiding the
> forced symmetry of synaptic weights, the question of learning the full
> joint distribution and not just a point prediction, doing unsupervised
> learning, and handling time.
> --
>
> Best, and looking forward to seeing you there
>
>    j
>
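
P.S. For anyone who wants to play with the ideas before the talk: the abstract
mentions an update rule that looks like STDP but is written in terms of pre-
and post-synaptic firing rates and their temporal rates of change. A minimal
sketch of one such rule (my own reading, not necessarily the exact rule Yoshua
will present) makes the weight change proportional to the pre-synaptic rate
times the temporal change of the post-synaptic rate:

    import numpy as np

    # Hypothetical rate-based reading of the STDP-like rule: potentiation when
    # post-synaptic activity is rising, depression when it is falling.
    def stdp_like_update(W, rate_pre, rate_post_prev, rate_post_now, lr=0.01):
        d_post = rate_post_now - rate_post_prev      # temporal rate of change
        return W + lr * np.outer(rate_pre, d_post)   # pre rate x change of post rate

    # Toy usage with made-up numbers (4 pre-synaptic and 3 post-synaptic units).
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.1, size=(4, 3))
    rate_pre = rng.random(4)
    rate_post_prev, rate_post_now = rng.random(3), rng.random(3)
    W = stdp_like_update(W, rate_pre, rate_post_prev, rate_post_now)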
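
The two-phase picture in the abstract -- relax a symmetrically connected
network by going down an energy, then briefly nudge the outputs toward the
observed value so that the early steps of inference carry the prediction
error, and update the weights from the two states -- can be sketched as
below. Everything here (the Hopfield-style energy, the nudging strength beta,
the number of nudged steps, the contrastive weight update) is an illustrative
assumption, not the talk's exact algorithm:

    import numpy as np

    def d_energy(s, W, b):
        # Gradient of a Hopfield-style energy E(s) = 0.5*||s||^2 - 0.5*s.T@W@s - b@s
        # with symmetric W, so that going down the energy is the inference dynamics.
        return s - W @ s - b

    def relax(s, W, b, x, n_in, steps, eps=0.05, beta=0.0, target=None, n_out=0):
        # Gradient descent on the energy with the sensory units clamped to the
        # input x; if beta > 0, the output units are additionally nudged toward
        # the observed target, injecting the prediction error into the dynamics.
        s = s.copy()
        for _ in range(steps):
            grad = d_energy(s, W, b)
            if beta > 0.0:
                grad[-n_out:] += beta * (s[-n_out:] - target)
            s = s - eps * grad
            s[:n_in] = x
        return s

    # Toy network: 3 input, 4 hidden, 2 output units; symmetric weights, zero diagonal.
    rng = np.random.default_rng(0)
    n_in, n_hid, n_out = 3, 4, 2
    n = n_in + n_hid + n_out
    W = rng.normal(scale=0.1, size=(n, n))
    W = 0.5 * (W + W.T)
    np.fill_diagonal(W, 0.0)
    b = np.zeros(n)
    x, y = rng.random(n_in), rng.random(n_out)

    s0 = np.zeros(n)
    s0[:n_in] = x
    s_free = relax(s0, W, b, x, n_in, steps=100)        # relax to the free fixed point
    s_nudge = relax(s_free, W, b, x, n_in, steps=10,    # a few nudged steps: the
                    beta=0.5, target=y, n_out=n_out)    # perturbation carries the error

    beta, lr = 0.5, 0.1                                 # contrastive weight update
    W = W + (lr / beta) * (np.outer(s_nudge, s_nudge) - np.outer(s_free, s_free))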


More information about the Lisa_seminaires mailing list