[Lisa_seminaires] Fwd: PhD thesis defense of Dumitru Erhan

Yoshua Bengio bengioy at iro.umontreal.ca
Wed 16 Feb 13:45:38 EST 2011



Begin forwarded message:

> From: tappa at iro.umontreal.ca
> Date: February 16, 2011 1:39:22 PM EST (CA)
> To: seminaires at iro.umontreal.ca
> Subject: PhD thesis defense of Dumitru Erhan
>
> Hello,
>
> You are all cordially invited to attend the PhD thesis defense of
> Dumitru Erhan, which will take place on Wednesday, February 23 at
> 10:30 a.m. in room 1360 of Pavillon André-Aisenstadt.
>
>
>
> Research Supervisor: Yoshua Bengio
> Committee Chair (Rapporteur): Alain Tapp
> Jury Member: Max Mignotte
> External Examiner: Andrew Y. Ng
>
> Title: Understanding deep architectures and the effect of unsupervised
> pre-training
>
> Abstract:
> This thesis studies a class of algorithms called deep architectures.
> We argue that models based on a shallow composition of local features
> are not appropriate for the real-world functions and datasets that
> interest us, namely data with many factors of variation. Modelling
> such functions and datasets is important if we hope to create an
> intelligent agent that can learn from complicated data. Deep
> architectures are hypothesized to be a step in the right direction,
> as they are compositions of nonlinearities and can learn compact
> distributed representations of data with many factors of variation.
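>
> In numpy-style code, this composition of nonlinearities can be made
> concrete; the layer sizes and the tanh nonlinearity below are
> arbitrary illustrative choices, not details taken from the thesis:
>
> import numpy as np
>
> rng = np.random.RandomState(0)
> sizes = [784, 500, 500, 10]   # input, two hidden layers, output (illustrative)
> weights = [rng.randn(m, n) * 0.01 for m, n in zip(sizes[:-1], sizes[1:])]
>
> def deep_net(x):
>     """A deep architecture is a nested composition of nonlinear layers."""
>     h = x
>     for W in weights:
>         h = np.tanh(np.dot(h, W))   # each layer re-represents the previous one
>     return h
>
> y = deep_net(rng.randn(1, 784))   # a distributed code is built layer by layer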
>
> Training fully-connected artificial neural networks, the most common
> form of deep architecture, was not possible before Hinton (2006)
> showed that one can use stacks of unsupervised Restricted Boltzmann
> Machines to initialize or pre-train a supervised multi-layer network.
> This breakthrough has been influential, as the basic idea of using
> unsupervised learning to improve generalization in deep networks has
> been reproduced in a multitude of other settings and models.
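>
> A minimal numpy sketch of this greedy layer-wise idea, using
> single-step contrastive divergence (CD-1) and omitting biases; the
> layer sizes, learning rate, and toy data are illustrative assumptions
> rather than details from the thesis:
>
> import numpy as np
>
> rng = np.random.RandomState(0)
>
> def sigmoid(x):
>     return 1.0 / (1.0 + np.exp(-x))
>
> def train_rbm(data, n_hidden, lr=0.1, epochs=5):
>     """Fit one RBM with one-step contrastive divergence (biases omitted)."""
>     W = rng.randn(data.shape[1], n_hidden) * 0.01
>     for _ in range(epochs):
>         h_prob = sigmoid(np.dot(data, W))          # positive phase
>         h_state = (h_prob > rng.rand(*h_prob.shape)).astype(float)
>         v_recon = sigmoid(np.dot(h_state, W.T))    # reconstruct the input
>         h_recon = sigmoid(np.dot(v_recon, W))      # negative phase
>         W += lr * (np.dot(data.T, h_prob)
>                    - np.dot(v_recon.T, h_recon)) / len(data)
>     return W
>
> X = (rng.rand(200, 784) > 0.5).astype(float)       # toy binary "data"
> inputs, pretrained = X, []
> for n_hidden in [500, 500]:                        # one RBM per hidden layer
>     W = train_rbm(inputs, n_hidden)
>     pretrained.append(W)
>     inputs = sigmoid(np.dot(inputs, W))            # features feed the next RBM
> # `pretrained` now holds initial weights for a supervised multi-layer
> # network, which would then be fine-tuned on labelled data.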
>
> In this thesis, we cast deep learning ideas and techniques as
> defining a special kind of inductive bias. This bias is defined not
> only by the kind of functions that such deep models eventually
> represent, but also by the learning process that is commonly used for
> them. This work is a study of the reasons why this class of functions
> generalizes well, of the situations where they should work well, and
> of the qualitative statements one can make about such functions.
>
> This thesis is thus an attempt to understand why deep architectures
> work. In the first of the articles presented, we study how well our
> intuitions about the need for deep models correspond to the functions
> that such models can actually represent well. In the second article,
> we perform an in-depth study of why unsupervised pre-training helps
> deep learning and explore a variety of hypotheses that give us an
> intuition for the dynamics of learning in such architectures.
> Finally, in the third article, we seek to better understand,
> qualitatively, what a deep architecture models. Our visualization
> approach enables us to understand the representations and invariances
> modelled and learned by deeper layers.
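>
> One way to make this concrete is activation maximization: gradient
> ascent on the input to find the pattern that most excites a unit in a
> deeper layer. A small numpy sketch with assumed, untrained weights
> (the architecture, step size, and unit choice are illustrative and
> not necessarily the exact procedure of the thesis):
>
> import numpy as np
>
> rng = np.random.RandomState(0)
> W1 = rng.randn(784, 100) * 0.05   # stand-ins for a trained network's weights
> W2 = rng.randn(100, 50) * 0.05
>
> def activation_maximization(unit, steps=200, lr=0.1):
>     """Find the input image that most excites one second-layer unit."""
>     x = rng.randn(784) * 0.01
>     for _ in range(steps):
>         h1 = np.tanh(np.dot(x, W1))
>         h2 = np.tanh(np.dot(h1, W2))
>         # chain rule: propagate the unit's activation back to the input
>         grad = (1 - h2[unit] ** 2) * np.dot(W1, (1 - h1 ** 2) * W2[:, unit])
>         x += lr * grad
>         x /= np.linalg.norm(x) + 1e-8   # constrain the input to the unit sphere
>     return x
>
> filter_image = activation_maximization(unit=3).reshape(28, 28)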
>
>
> --
> Alain Tapp
> Director of INTRIQ and
> Associate Professor
> DIRO, Université de Montréal
> http://www.iro.umontreal.ca/~tappa/



More information about the Lisa_seminaires mailing list