[Lisa_seminaires] 3 talks this week

Yoshua Bengio yoshua.umontreal at gmail.com
Sun Feb 14 21:58:06 EST 2016


And here are the title and abstract for my Friday talk (2:30pm):

Towards bridging the gap between deep learning and biology

We explore the following crucial question: how could brains potentially perform the kind of powerful credit assignment that allows hidden layers of a very deep network to be trained, and that has recently been so successful with backprop in deep nets? Global reinforcement learning signals have too much variance (scaling with the number of neurons or synapses) to be credible from a machine learning point of view, and concerns have been raised about how something like back-propagation could be implemented in brains. We present several intriguing results, all aimed at answering this question and possibly providing pieces of this puzzle. We start with an update rule that yields updates similar to STDP but that is anchored in quantities such as pre-synaptic and post-synaptic firing rates and temporal rates of change. We then show that if neurons are connected symmetrically (with feedback connections) and define an energy function, (a) their behaviour corresponds both to inference, i.e., going down the energy, and to propagating error gradients; (b) after a prediction is made at a sensor and an actual value is observed, the early phases of inference in this network actually propagate prediction error gradients; (c) using the above STDP-inspired rule yields a gradient descent step on prediction error at the fixed point of the recurrent network; and (d) contrary to what was previously believed for such fixed-point networks, it is not necessary to do a full relaxation in the positive phase (perturbation propagation does the backprop job). Finally, we discuss some of the open problems we face in moving forward, such as avoiding the negative-phase fixed-point relaxation (just as we got rid of the positive-phase one), avoiding the forced symmetry of synaptic weights, learning the full joint distribution rather than just a point prediction, doing unsupervised learning, and handling time.
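
A minimal sketch of the kind of dynamics described above, assuming a Hopfield-style energy over symmetrically connected rate neurons and a hard-sigmoid rate function; the energy form, nonlinearity, and constants are illustrative assumptions rather than the talk's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def rho(s):
    """Illustrative firing-rate nonlinearity (hard sigmoid)."""
    return np.clip(s, 0.0, 1.0)

# Symmetric weights define a Hopfield-style energy
#   E(s) = 0.5*||s||^2 - 0.5*rho(s)^T W rho(s) - b^T rho(s).
n = 5
W = rng.normal(scale=0.1, size=(n, n))
W = 0.5 * (W + W.T)            # the symmetry assumed in the abstract
np.fill_diagonal(W, 0.0)
b = np.zeros(n)

def energy(s):
    r = rho(s)
    return 0.5 * s @ s - 0.5 * r @ W @ r - b @ r

def inference_step(s, eps=0.1):
    """One step of ds/dt = -dE/ds: inference as going down the energy."""
    drho = ((s > 0.0) & (s < 1.0)).astype(float)   # rho'(s)
    return s - eps * (s - drho * (W @ rho(s) + b))

def stdp_update(W, s_old, s_new, lr=0.01):
    """STDP-inspired rule: pre-synaptic rate times the temporal rate of
    change of the post-synaptic rate (symmetrized to keep W symmetric)."""
    dr = rho(s_new) - rho(s_old)                   # temporal rate of change
    return W + lr * 0.5 * (np.outer(rho(s_old), dr) + np.outer(dr, rho(s_old)))

s = rng.normal(size=n)
for _ in range(50):
    s_next = inference_step(s)
    W = stdp_update(W, s, s_next)
    s = s_next
print("energy after relaxation:", float(energy(s)))
```

The property the sketch mimics is that inference is plain gradient descent on the energy, while the weight update uses only locally available quantities: pre-synaptic rates and the temporal change of post-synaptic rates.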

-- Yoshua


2016-02-13 14:43 GMT-05:00 Yoshua Bengio <yoshua.umontreal at gmail.com>:

> Hi all,
>
> There are three talks to attend this week:
>
> Tuesday 10:30: Mingbin Feng, on reusing outputs of simulation experiments
> Thursday 15:30: Simon Lacoste-Julien, on structured machine learning
> Friday 14:30: Yoshua Bengio, on biologically plausible backprop
>
> Nota bene: a TV crew should be around between Wednesday and Friday.
>
> -------
>
>  *Green Simulation: Reusing the Output of Simulation Experiments*
>
> by
>
>
> *Mingbin Feng*
>
>  Northwestern University
>
> *Tuesday, February 16, 10:30-11:30*, *Room 3195*, Pavillon André-Aisenstadt
>
>     Université de Montréal, 2920 Chemin de la Tour
>
> Coffee beforehand, 10:00-10:30
>  *Abstract:*
>
> In finance and insurance, simulations are often run repeatedly with different inputs. For example, in a simulation for risk management, the simulation model for valuing derivative securities is run many times under different macroeconomic conditions. We present a new concept of green simulation, which seeks to increase the computational efficiency of the current experiment by reusing simulation output generated during previous experiments. Green simulation views simulation output as a scarce resource and turns the computational expense of an experiment into a computational investment for future ones. We propose and examine two green simulation estimators for repeated experiments whose inputs are observations from an underlying stochastic process. Two types of convergence are shown for these green simulation estimators under different assumptions. Two practical applications, catastrophe bond pricing and periodic credit risk evaluation, illustrate that green simulation is both theoretically sound and practically useful.
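>
> The abstract does not spell out the estimators; as one concrete instance of the reuse idea, here is a toy sketch in which output simulated under earlier input parameters is reweighted by likelihood ratios and pooled with the current experiment (the Gaussian input model, payoff function, and pooling rule are illustrative assumptions, not the paper's estimators):
>
> ```python
> import numpy as np
> from scipy.stats import norm
>
> rng = np.random.default_rng(1)
>
> def payoff(x):
>     """Toy payoff of a 'derivative security' evaluated at sampled states."""
>     return np.maximum(x - 1.0, 0.0)
>
> # Experiment n draws inputs X ~ N(theta_n, 1) for a new macroeconomic
> # condition theta_n and estimates E[payoff(X)] under theta_n.
> thetas = [0.0, 0.3, 0.6, 0.9]
> m = 2000                       # replications per experiment
> bank = []                      # stored (samples, generating parameter)
>
> for theta in thetas:
>     x = rng.normal(theta, 1.0, size=m)
>     bank.append((x, theta))
>
>     # Standard estimator: uses only the current experiment's output.
>     standard = payoff(x).mean()
>
>     # Green estimator: reuses all stored output via likelihood ratios
>     # f(x; theta_now) / f(x; theta_past), then pools the experiments.
>     green_terms = []
>     for x_past, theta_past in bank:
>         w = norm.pdf(x_past, theta, 1.0) / norm.pdf(x_past, theta_past, 1.0)
>         green_terms.append((w * payoff(x_past)).mean())
>     green = np.mean(green_terms)
>
>     print(f"theta={theta:.1f}  standard={standard:.4f}  green={green:.4f}")
> ```
>
> Each reweighted term is an unbiased importance-sampling estimate, so the pooled average reuses every past replication at no additional simulation cost; its variance depends on how far apart the input parameters are.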
>
>
> ---------- Forwarded message ----------
> From: Neil Stewart <stewart at iro.umontreal.ca>
> Date: 2016-02-13 13:22 GMT-05:00
> Subject: DIRO colloquium, Thursday, February 18, 2016. Speaker: Simon
> Lacoste-Julien
> To: seminaires at iro.umontreal.ca
>
>
>  *Modern Optimization for Structured Machine Learning*
>
> by
>
>
> *Simon Lacoste-Julien*
>
> INRIA
>
> *Thursday, February 18, 15:30-16:30*, *Room 3195*, Pavillon André-Aisenstadt
>
>     Université de Montréal, 2920 Chemin de la Tour
>
> Coffee beforehand, 15:00-15:30
>
>
> *Abstract:*
>
>
>
> Machine learning has grown significantly in the last two decades and has had an impact in areas as diverse as computer vision, natural language processing, computational biology and the social sciences. These new applications have made apparent, though, that real-world data have a richer structure than is captured by some of the classical paradigms of machine learning, such as binary classification and regression. In machine translation, for example, the algorithm needs to choose among an exponential number of possible sequences of words as translations, not just a few options as in handwritten digit recognition. A key challenge in modern machine learning is to find ways to model this complex structure in a scalable manner that is still robust to model misspecification.
>
> In this talk, I will present such a method, which can exploit the combinatorial structure in data represented by graphs, with applications such as word alignment in natural language processing, the alignment of large knowledge bases for the Semantic Web, and the tracking of multiple objects in video. I will also present how these problems have motivated progress on novel optimization techniques, including improvements on the venerable Frank-Wolfe optimization algorithm (1956) and the Robbins-Monro stochastic gradient method (1951). These examples will highlight how the rich two-way street between optimization and machine learning enables us to exploit the structure of complex data more effectively.
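>
> A minimal sketch of the Frank-Wolfe algorithm mentioned above, run on the probability simplex, where the linear minimization oracle reduces to picking a single vertex; the quadratic objective is an illustrative stand-in for the structured objectives of the talk (in structured prediction the oracle would instead be a combinatorial subproblem such as MAP decoding):
>
> ```python
> import numpy as np
>
> def frank_wolfe_simplex(grad, x0, n_iters=200):
>     """Frank-Wolfe (1956) on the probability simplex.
>
>     Each iteration solves the linear minimization oracle
>         s_t = argmin_{s in simplex} <grad f(x_t), s>,
>     which on the simplex is a single vertex (a one-hot argmin),
>     then moves x_t toward s_t with the classic 2/(t+2) step size.
>     Iterates stay feasible by construction: no projection needed.
>     """
>     x = x0.copy()
>     for t in range(n_iters):
>         g = grad(x)
>         s = np.zeros_like(x)
>         s[np.argmin(g)] = 1.0          # oracle: best vertex
>         gamma = 2.0 / (t + 2.0)
>         x = (1 - gamma) * x + gamma * s
>     return x
>
> # Illustrative objective: f(x) = 0.5 * ||x - y||^2, i.e. projecting y
> # onto the simplex; its gradient is x - y.
> y = np.array([0.9, 0.4, -0.2, 0.1])
> x0 = np.full(4, 0.25)
> print("approx. projection:", np.round(frank_wolfe_simplex(lambda x: x - y, x0), 3))
> ```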
>