[Lisa_seminaires] 2 AISTATS 2010 practice talks
Yoshua Bengio
bengioy at iro.umontreal.ca
Sat Mar 20 16:14:04 EDT 2010
Hi,
The following two AISTATS 2010 papers will be presented on March 24th,
starting at 10:30, in room Z-205 of Pavillon
at U. Montreal, by their first authors, Guillaume Desjardins and
Xavier Glorot respectively:
-------------------------------------------------------------------------------------------------------------------------------------------------
Guillaume Desjardins (with Aaron Courville, Yoshua Bengio, Pascal
Vincent, Olivier Delalleau)
Parallel Tempering for Training of Restricted Boltzmann Machines
Alternating Gibbs sampling between visible and latent units is the
most common scheme used for sampling from Restricted Boltzmann
Machines (RBMs), a crucial component in deep architectures such as Deep
Belief Networks. However, we find that it often does a very poor job
of rendering the diversity of modes captured by the trained model. We
suspect that this property hinders RBM training methods such as the
Persistent Contrastive Divergence algorithm that rely on Gibbs
sampling to approximate the likelihood gradient. To alleviate this
problem, we explore the use of tempered Markov chain Monte Carlo for
sampling in RBMs. Through visualization of samples and likelihood
measurements on a toy dataset, we find that it helps both sampling
and learning.
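
For readers who want the mechanics, below is a minimal NumPy sketch
(not the authors' code) of alternating Gibbs sampling in a binary RBM
together with the parallel-tempering swap move between chains run at
different inverse temperatures; all sizes, the temperature ladder, and
variable names are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v, W, b, c, beta=1.0):
    # One alternating Gibbs update of a binary RBM at inverse
    # temperature beta (beta = 1 is the model being trained,
    # beta = 0 is a uniform, fast-mixing distribution).
    h = (rng.random(c.shape) < sigmoid(beta * (v @ W + c))).astype(float)
    v = (rng.random(b.shape) < sigmoid(beta * (h @ W.T + b))).astype(float)
    return v, h

def energy(v, h, W, b, c):
    # Joint RBM energy E(v, h) = -b'v - c'h - v'Wh.
    return -(v @ b) - (h @ c) - v @ W @ h

def pt_step(states, betas, W, b, c):
    # One parallel-tempering round: Gibbs-update every chain at its
    # own temperature, then propose swaps between adjacent chains,
    # accepted with the Metropolis probability
    # min(1, exp((beta_k - beta_{k+1}) * (E_k - E_{k+1}))).
    states = [gibbs_step(v, W, b, c, beta)
              for (v, _), beta in zip(states, betas)]
    for k in range(len(states) - 1):
        e_k = energy(*states[k], W, b, c)
        e_k1 = energy(*states[k + 1], W, b, c)
        if np.log(rng.random()) < (betas[k] - betas[k + 1]) * (e_k - e_k1):
            states[k], states[k + 1] = states[k + 1], states[k]
    return states

# Toy usage: 5 chains from beta = 1 (the RBM) down to beta = 0 (uniform).
n_vis, n_hid = 6, 4
W = rng.normal(0.0, 0.1, size=(n_vis, n_hid))
b, c = np.zeros(n_vis), np.zeros(n_hid)
betas = np.linspace(1.0, 0.0, 5)
states = [gibbs_step(rng.integers(0, 2, n_vis).astype(float), W, b, c, bt)
          for bt in betas]
for _ in range(100):
    states = pt_step(states, betas, W, b, c)
v_sample, _ = states[0]  # negative-phase sample at beta = 1

The swaps let fast-mixing high-temperature chains feed fresh modes
down to the beta = 1 chain, which is what the abstract argues plain
Gibbs sampling fails to do.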
-------------------------------------------------------------------------------------------------------------------------------------------------
Xavier Glorot (with Yoshua Bengio)
Understanding the difficulty of training deep feedforward neural
networks
Whereas before 2006 it appeared that deep multi-layer neural networks
could not be trained successfully, since then several algorithms have
been shown to train them successfully, with experimental results
showing the superiority of deeper versus shallower architectures.
experimental results were obtained with new initialization or training
mechanisms. Our objective here is to understand better why standard
gradient descent from random initialization is doing so poorly with
deep neural networks, to better understand these recent relative
successes and help design better algorithms in the future. We first
observe the influence of the non-linear activation functions. We find
that the logistic sigmoid activation is unsuited for deep networks
with random initialization because of its non-zero mean, which can
drive the top hidden layer in particular into saturation. Surprisingly,
we find that saturated units can escape saturation on their own, albeit
slowly, which explains the plateaus sometimes seen when training
neural networks. We find that a new non-linearity that saturates less
can often be beneficial. Finally, we study how activations and
gradients vary across layers and during training, with the idea that
training may be more difficult when the singular values of the
Jacobian associated with each layer are far from 1. Based on these
considerations, we propose a new initialization scheme that brings
substantially faster convergence.
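
The abstract only alludes to the proposed scheme. For concreteness,
here is a minimal NumPy sketch of the normalized initialization from
the paper (now commonly called Glorot or Xavier initialization); the
layer sizes and batch size below are illustrative, not taken from the
paper.

import numpy as np

rng = np.random.default_rng(0)

def normalized_init(fan_in, fan_out):
    # "Normalized" initialization: weights drawn from
    # U[-sqrt(6/(fan_in+fan_out)), +sqrt(6/(fan_in+fan_out))],
    # chosen so that activation and gradient variances are roughly
    # preserved across layers, i.e. the layer Jacobians start with
    # singular values near 1.
    bound = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-bound, bound, size=(fan_in, fan_out))

# Forward-propagate a random batch through a tanh network and watch
# the activation scale, which stays in a healthy range across layers
# instead of saturating or dying out.
sizes = [784, 1000, 1000, 1000, 10]
x = rng.normal(size=(256, sizes[0]))
for fan_in, fan_out in zip(sizes[:-1], sizes[1:]):
    x = np.tanh(x @ normalized_init(fan_in, fan_out))
    print(x.std())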
-------------------------------------------------------------------------------------------------------------------------------------------------
-- Yoshua
More information about the Lisa_seminaires mailing list