[Lisa_seminaires] [Lisa_labo] ICML-practice talks: Thursday, June 13th, AA3195, @ 2:00PM

Yoshua Bengio yoshua.bengio at gmail.com
Thu Jun 13 13:58:00 EDT 2013


ICML practice talks starting in 2 minutes at 3195!

--Yoshua

On 2013-06-12, at 14:27, Yoshua Bengio wrote:

> 
> Actually for Razvan it is a full ICML oral.
> 
> -- Yoshua
> 
> On 2013-06-12, at 13:57, Guillaume Desjardins wrote:
> 
>> Please join us tomorrow afternoon (2:00-3:30PM) for a special round of
>> ICML practice talks. The speakers will be Razvan Pascanu (remotely via
>> Google Hangouts) and Yoshua Bengio (for both a workshop talk and a
>> 5-minute ICML spotlight). Titles and abstracts below.
>> 
>> The talks will be held in AA3195 as usual.
>> 
>> 
>> Speaker: Razvan Pascanu
>> Title: On the difficulty of training Recurrent Neural Networks
>> 
>> There are two widely known issues with properly training Recurrent
>> Neural Networks: the vanishing and the exploding gradient problems
>> detailed in Bengio et al. (1994). In this paper we attempt to improve
>> the understanding of the underlying issues by exploring these problems
>> from an analytical, a geometric and a dynamical systems perspective.
>> Our analysis is used to justify a simple yet effective solution. We
>> propose a gradient norm clipping strategy to deal with exploding
>> gradients and a soft constraint for the vanishing gradients problem.
>> We empirically validate our hypotheses and proposed solutions in the
>> experimental section.
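>>
>> A rough sketch of the gradient-norm clipping step described above (an
>> illustration only, not the paper's code; numpy and the threshold value
>> are assumptions):
>>
>>     import numpy as np
>>
>>     def clip_gradient_norm(grads, threshold=5.0):
>>         # Rescale the whole set of gradients whenever their global norm
>>         # exceeds the threshold (the threshold value is illustrative).
>>         total_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
>>         if total_norm > threshold:
>>             grads = [g * (threshold / total_norm) for g in grads]
>>         return grads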
>> 
>> 
>> Speaker: Yoshua Bengio (workshop talk)
>> Title: From Latent Anonymous Variables to Deep Stochastic Networks
>> 
>> Graphical models with anonymous latent variables (such as Markov
>> Random Fields and variants of Boltzmann machines) are very powerful
>> models which have had a great impact in machine learning. We propose
>> here to consider a type of model over the joint distribution of
>> observed variables that shares many of the properties of these
>> anonymous latent variable models, but without the need for potentially
>> hurtful approximate inference or approximation of the partition
>> function during training (or both, as in Deep Boltzmann Machines).
>> The only approximation is function approximation.  The proposed deep
>> stochastic networks can provably estimate the underlying joint
>> distribution of the data (or a conditional, if discriminant learning
>> is preferred), in the sense that they are consistent estimators of
>> this joint or conditional distribution.  However, unlike the usual
>> probabilistic models, they only represent the learned distribution
>> implicitly, through the convergent distribution of a Markov chain.
>> Like them, though, they can handle missing values and structured
>> output. Unlike them, they can be trained by straightforward gradient
>> descent in a deep (possibly recurrent) network in which noise is
>> injected. Most of the impressive experimental progress in deep
>> learning in recent years (in particular for speech and object
>> recognition) has been with deep supervised networks, sometimes
>> with noise injected (dropout). Interestingly, the same kinds of
>> architectures can be used to train the proposed unsupervised (or
>> structured output) models.
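>>
>> A minimal sketch of the sampling loop such a model implies (assumptions:
>> 'reconstruct' stands for whatever deep network has been trained by
>> gradient descent on noise-corrupted inputs; the Gaussian corruption and
>> the number of steps are illustrative):
>>
>>     import numpy as np
>>
>>     rng = np.random.default_rng(0)
>>
>>     def corrupt(x, noise_std=0.5):
>>         # Noise injection; Gaussian corruption is one illustrative choice.
>>         return x + noise_std * rng.normal(size=x.shape)
>>
>>     def sample_chain(x0, reconstruct, n_steps=100):
>>         # Alternate corruption and reconstruction: the chain's convergent
>>         # distribution implicitly represents the learned distribution.
>>         x = np.asarray(x0, dtype=float)
>>         samples = []
>>         for _ in range(n_steps):
>>             x = np.asarray(reconstruct(corrupt(x)), dtype=float)
>>             samples.append(x.copy())
>>         return samples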
>> 
>> 
>> Speaker: Yoshua Bengio (ICML spotlight)
>> Title: Better Mixing via Deep Representations
>> 
>> It has been hypothesized, and supported with experimental evidence,
>> that deeper representations, when well trained, tend to do a better
>> job at disentangling the underlying factors of variation. We study the
>> following related conjecture: better representations, in the sense of
>> better disentangling, can be exploited to produce Markov chains that
>> mix faster between modes. Consequently, mixing between modes would be
>> more efficient at higher levels of representation. To better
>> understand this, we propose a secondary conjecture: the higher-level
>> samples fill the space they occupy more uniformly, and the high-density
>> manifolds tend to unfold when represented at higher levels. The paper
>> discusses these hypotheses and tests them experimentally through
>> visualization and measurements of mixing between modes and
>> interpolating between samples.
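>>
>> The interpolation experiment can be pictured with a small sketch
>> (assumptions: 'encode' and 'decode' stand for a trained deep model's
>> upward and downward mappings, not a specific API from the paper):
>>
>>     import numpy as np
>>
>>     def interpolate_at_higher_level(x_a, x_b, encode, decode, n_points=9):
>>         # Interpolate between two samples in the higher-level
>>         # representation space, then map each point back to input space.
>>         h_a, h_b = encode(x_a), encode(x_b)
>>         return [decode((1.0 - a) * h_a + a * h_b)
>>                 for a in np.linspace(0.0, 1.0, n_points)]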



More information about the Lisa_seminaires mailing list