[Lisa_seminaires] [Lisa_labo] REMINDER: Talk Fri. 19th, 14:30: Towards bridging the gap between deep learning and biology

xavier bouthillier xavier.bouthillier at gmail.com
Fri Feb 19 15:41:46 EST 2016


Thank you so much, Jörg, for the streaming; it was much appreciated!

On Fri, Feb 19, 2016 at 12:35 PM, Jörg Bornschein <bornj at iro.umontreal.ca>
wrote:

> Hi,
>
> just a friendly reminder:
>
> This week Yoshua Bengio will talk about bridging the gap between deep
> learning and biology. We will also try something new and stream the talk at
>
>  https://plus.google.com/u/0/events/cta17i5fkbvblgg06ar7vbdtkf8
>  http://www.youtube.com/watch?v=lKVIXI8Djv4
>
>
> --
> Title: Towards bridging the gap between deep learning and biology
> Who: Yoshua Bengio
> When: Fri, Feb 19th, 14:30
> Where: Pavillon André-Aisenstadt, 3rd floor, AA 3195
>
> We explore the following crucial question: how could brains potentially
> perform the kind of powerful credit assignment that allows hidden layers of
> a very deep network to be trained and that has been so successful with
> backprop in deep nets recently? Global reinforcement learning signals have
> too much variance (scaling with the number of neurons or synapses) to be
> credible from a machine learning point of view. Concerns have been raised
> about how something like back-propagation could be implemented in brains.
> We present several intriguing results, all aimed at answering this question
> and possibly providing pieces of this puzzle. We start with an update rule
> that yields updates similar to STDP but that is anchored in quantities such
> as pre-synaptic and post-synaptic firing rates and temporal rates of
> change. We then show that if neurons are connected symmetrically (with
> feedback connections) and define an energy function, (a) their behaviour
> corresponds both to inference, i.e., going down the energy, and to
> propagating error gradients, (b) after a prediction is made on a sensor
> and an actual value is observed, the early phases of inference in this
> network actually propagate prediction error gradients, (c) using the above
> STDP-inspired rule yields a gradient descent step on the prediction error
> at the fixed point of the recurrent network, and (d) contrary to what was
> previously believed for such
> fixed-point networks, it is not necessary to do a full relaxation in the
> positive phase (perturbation propagation does the backprop job). Finally,
> we discuss some of the open problems we face in moving forward, such as
> avoiding the negative phase fixed point relaxation (just like we got rid of
> the positive phase one), avoiding the forced symmetry of synaptic weights,
> the question of learning the full joint distribution and not just a point
> prediction, doing unsupervised learning, and handling time.
> --
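
For concreteness, here is a rough reading of the two objects the abstract refers to, reconstructed from related work by the same group rather than from the talk itself, so the exact form below is an assumption. The STDP-inspired rule makes the weight change proportional to the pre-synaptic firing rate times the temporal rate of change of the post-synaptic activity,

    \Delta W_{ij} \propto \rho(s_i) \, \frac{d s_j}{d t},

and with symmetric connections the network can be given a Hopfield-style energy such as

    E(s) = \sum_i \frac{s_i^2}{2} - \sum_{i<j} W_{ij} \, \rho(s_i) \, \rho(s_j) - \sum_i b_i \, \rho(s_i),

so that "going down the energy" means running \dot{s} = -\partial E / \partial s on the unclamped units, which is the inference process the abstract identifies with propagating error gradients.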
>
> Best, and looking forward to seeing you there
>
>    j
>
> _______________________________________________
> Lisa_labo mailing list
> Lisa_labo at iro.umontreal.ca
> https://webmail.iro.umontreal.ca/mailman/listinfo/lisa_labo
>
>
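
For anyone who wants to play with these ideas before or after the talk, here is a small numerical sketch of the free-phase / nudged-phase picture in the abstract. It is not the speaker's code: the network sizes, the sigmoid rate function, the Hopfield-style energy from the note above, the clamping and nudging scheme, and all step sizes are choices made here purely for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Tiny symmetric network: a few "sensor" units, a hidden pool, two outputs.
n_in, n_hid, n_out = 3, 10, 2
n = n_in + n_hid + n_out
IN, OUT = slice(0, n_in), slice(n_in + n_hid, n)

W = rng.normal(scale=0.1, size=(n, n))
W = (W + W.T) / 2.0               # symmetric feedback connections
np.fill_diagonal(W, 0.0)          # no self-connections
b = np.zeros(n)

def rho(s):                       # firing-rate nonlinearity (a sigmoid, by assumption)
    return 1.0 / (1.0 + np.exp(-s))

def energy(s):                    # Hopfield-style energy, as in the note above
    r = rho(s)
    return 0.5 * s @ s - 0.5 * r @ (W @ r) - b @ r

def grad_s(s):                    # dE/ds, with rho'(s) = rho(s) * (1 - rho(s))
    r = rho(s)
    return s - r * (1.0 - r) * (W @ r + b)

def relax(s, x, steps=100, eps=0.1, y=None, beta=0.5):
    """Inference: go down the energy with the sensor units clamped to x.
    If a target y is given, the outputs are also weakly nudged toward it."""
    s = s.copy()
    for _ in range(steps):
        g = grad_s(s)
        if y is not None:
            g[OUT] += beta * (s[OUT] - y)   # gradient of beta/2 * ||s_out - y||^2
        s -= eps * g
        s[IN] = x                           # keep the sensors clamped
    return s

x = rng.uniform(size=n_in)
y = np.array([1.0, 0.0])

# Free phase: settle toward a fixed point given the input (the prediction).
s0 = np.zeros(n)
s0[IN] = x
s_free = relax(s0, x)
print("energy decreased during relaxation:", energy(s_free) < energy(s0))

# Early nudged phase: once the target is introduced, the *change* of each unit
# carries an error signal; the STDP-like rule (pre-synaptic rate times
# post-synaptic rate of change) turns that change into a weight update.
s_nudged = relax(s_free, x, steps=5, y=y)
ds = s_nudged - s_free
lr = 0.5                                    # learning rate, arbitrary
dW = lr * (np.outer(rho(s_free), ds) + np.outer(ds, rho(s_free))) / 2.0
np.fill_diagonal(dW, 0.0)
W += dW

err_before = float(np.sum((s_free[OUT] - y) ** 2))
err_after = float(np.sum((relax(s0, x)[OUT] - y) ** 2))
print("output error before / after one update:", err_before, err_after)

The toy only illustrates that relaxation with the inputs clamped goes down the energy, and that a rate-times-rate-of-change update applied during the first few nudged steps is the kind of step that points (a)-(d) of the abstract relate to gradient descent on the prediction error.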