[Lisa_seminaires] UdeM-McGill-MITACS machine learning seminar Tues, Oct. 20th -

Guillaume Desjardins guillaume.desjardins at gmail.com
Thu 15 Oct 12:14:18 EDT 2009


Yoshua Bengio will be giving a talk next Tuesday here at the
Université de Montréal, entitled "On Training Deep Neural Networks".
Hope many of you can make it!


Speaker: Yoshua Bengio, Université de Montréal
Title: On Training Deep Neural Networks

Location: Pavillon André-Aisenstadt (Université de Montréal), room 3195
Time: October 20th 2009, 12h30-13h30

Whereas theoretical work suggests that deep architectures might be more
efficient at representing highly-varying functions, training deep
architectures was unsuccessful until the recent advent of algorithms based
on unsupervised pre-training.  Even though these new algorithms have
enabled training deep models, many questions remain as to the nature of
this difficult learning problem.  We attempt to shed some light on these
questions in several ways, by comparing different successful approaches to
training deep architectures and through extensive simulations investigating
explanatory hypotheses.  The experiments confirm and clarify the advantage
(and sometimes disadvantage) of unsupervised pre-training.  They
demonstrate the robustness of the training procedure with respect to
random initialization, the positive effect of pre-training in terms of
optimization, and its role as a regularizer (in both cases in unusual ways).
We explore explanatory hypotheses based on the notion that the early growth
of the model parameters plays a determining role, and in particular that
early use of
unsupervised learning places the dynamics of supervised learning in
attractors associated with local minima with good generalization
properties.  We discuss how several training approaches for deep
architectures may exploit the principle of continuation methods in order to
find good local minima.  In particular, we suggest that this is the case for
shaping and the use of a curriculum, showing that a curriculum affects both
the speed of convergence of the training process to a minimum and, in the
case of non-convex criteria, the quality of the local minima obtained.
Finally, we investigate the nature and evolution of gradients at different
levels of a deep supervised neural network, in an attempt to understand why
training sometimes slows down and sometimes appears to get stuck in
apparent local minima.
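
Below are a few small, illustrative Python/numpy sketches of the ideas
mentioned in the abstract; they are simplified assumptions for intuition,
not the speaker's actual code or experimental setup.

The first one sketches greedy layer-wise unsupervised pre-training with
denoising autoencoders; the learned encoder weights would then initialize a
deep supervised network that is fine-tuned with ordinary backpropagation.
Layer sizes, noise level and learning rate are arbitrary placeholders.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pretrain_layer(X, n_hidden, noise=0.3, lr=0.1, epochs=50):
    """Fit one denoising-autoencoder layer on X; return the encoder (W, b)."""
    n_in = X.shape[1]
    W = rng.normal(0.0, 0.01, size=(n_in, n_hidden))   # encoder weights
    b = np.zeros(n_hidden)                             # encoder bias
    V = rng.normal(0.0, 0.01, size=(n_hidden, n_in))   # decoder weights
    c = np.zeros(n_in)                                 # decoder bias
    for _ in range(epochs):
        X_tilde = X * (rng.random(X.shape) > noise)    # randomly mask inputs
        H = sigmoid(X_tilde @ W + b)                   # encode corrupted input
        R = sigmoid(H @ V + c)                         # reconstruct clean input
        dR = (R - X) * R * (1 - R)                     # squared-error gradient
        dH = (dR @ V.T) * H * (1 - H)
        V -= lr * (H.T @ dR) / len(X)
        c -= lr * dR.mean(axis=0)
        W -= lr * (X_tilde.T @ dH) / len(X)
        b -= lr * dH.mean(axis=0)
    return W, b        # the decoder (V, c) is discarded after pre-training

def greedy_pretrain(X, layer_sizes):
    """Pre-train a stack of layers, each on the codes of the layer below."""
    params, H = [], X
    for n_hidden in layer_sizes:
        W, b = pretrain_layer(H, n_hidden)
        params.append((W, b))
        H = sigmoid(H @ W + b)           # propagate the data one level up
    return params                        # used to initialize the deep net

# Toy usage on random data; real fine-tuning with labels would follow.
X = rng.random((256, 64))
init = greedy_pretrain(X, layer_sizes=[32, 16])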
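
The second one sketches a curriculum as a simple continuation strategy:
training starts on the examples judged easiest and gradually includes
harder ones. How `difficulty` is measured and what `train_one_epoch` does
are task-specific placeholders, not part of the talk.

import numpy as np

def curriculum_train(X, y, difficulty, train_one_epoch,
                     stages=(0.25, 0.5, 0.75, 1.0), epochs_per_stage=10):
    """Train on progressively larger subsets, from easiest to hardest examples."""
    order = np.argsort(difficulty)            # easiest examples first
    for frac in stages:                       # each stage relaxes the problem less
        n = max(1, int(frac * len(X)))
        idx = order[:n]                       # keep only the n easiest examples
        for _ in range(epochs_per_stage):
            train_one_epoch(X[idx], y[idx])   # ordinary supervised updates

# `difficulty` could be, e.g., sentence length, image clutter, or the loss of
# a simpler model; `train_one_epoch` is any SGD-style epoch routine.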
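
The last one sketches how one might monitor gradient magnitudes at
different levels of a deep supervised network; the small sigmoid MLP and
squared-error criterion are only assumptions for illustration.

import numpy as np

rng = np.random.default_rng(0)
sizes = [64, 32, 32, 10]                       # input, two hidden layers, output
Ws = [rng.normal(0, 0.1, (a, b)) for a, b in zip(sizes[:-1], sizes[1:])]
bs = [np.zeros(b) for b in sizes[1:]]

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def layer_gradient_norms(X, Y):
    """One forward/backward pass; return the gradient norm of each weight matrix."""
    acts = [X]
    for W, b in zip(Ws, bs):                   # forward pass
        acts.append(sigmoid(acts[-1] @ W + b))
    delta = (acts[-1] - Y) * acts[-1] * (1 - acts[-1])   # output-layer error
    norms = []
    for i in reversed(range(len(Ws))):         # backward pass
        dW = acts[i].T @ delta / len(X)
        norms.append(np.linalg.norm(dW))
        if i > 0:
            delta = (delta @ Ws[i].T) * acts[i] * (1 - acts[i])
    return norms[::-1]                         # ordered from bottom layer to top

X, Y = rng.random((128, 64)), rng.random((128, 10))
print(layer_gradient_norms(X, Y))  # lower layers typically show smaller norms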

--
Guillaume Desjardins


More information about the Lisa_seminaires mailing list