[Lisa_teatalk] Tea-talk: Thursday, Feb 28th @ 2pm (lisa lab)

Guillaume Desjardins guillaume.desjardins at gmail.com
Wed Feb 27 18:06:27 EST 2013


Razvan and I will be discussing our recent ICLR submissions tomorrow
afternoon at 2:00pm. We have discussed very similar topics in past
tea-talks, but hopefully things will be much clearer (and more convincing)
now that the dust has settled a little bit. Next week, we will have Ian and
Caglar discussing their ICML (maxout) and ICLR submissions, respectively.

More volunteers will be needed in 2 weeks to discuss the other ICLR
submissions. See you there!


Title: Natural Gradient Revisited
<http://openreview.net/document/d54212e3-fa9a-40c4-991d-320bda524718#d54212e3-fa9a-40c4-991d-320bda524718>
Razvan Pascanu, Yoshua Bengio

The aim of this paper is two-fold. First, we show that Hessian-Free
optimization (Martens, 2010) and Krylov Subspace Descent (Vinyals and
Povey, 2012) can be described as implementations of Natural Gradient
Descent, due to their use of the extended Gauss-Newton approximation of
the Hessian. Second, we re-derive Natural Gradient from basic principles,
contrasting the two versions of the algorithm found in the literature.
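
As a rough illustration of the connection the paper discusses (not code from
the paper), here is a minimal NumPy sketch of a natural gradient step computed
the way Hessian-Free does it: the metric is never formed explicitly; instead a
metric-vector product (here an empirical Fisher-vector product built from
per-example gradients) is fed to conjugate gradient. The matrix of per-example
gradients, the damping constant, and the function names are placeholder
assumptions for the sketch.

    import numpy as np

    def fisher_vector_product(per_example_grads, v, damping=1e-4):
        # per_example_grads: (N, P) matrix of per-example gradients of log p(x; theta).
        # Empirical Fisher F = (1/N) G^T G; return (F + damping * I) v without forming F.
        Gv = per_example_grads @ v                                   # shape (N,)
        return per_example_grads.T @ Gv / per_example_grads.shape[0] + damping * v

    def natural_gradient_step(per_example_grads, grad, n_cg_iters=20):
        # Solve F d = grad with conjugate gradient, using only F-vector products.
        d = np.zeros_like(grad)
        r = grad - fisher_vector_product(per_example_grads, d)
        p = r.copy()
        rs_old = r @ r
        for _ in range(n_cg_iters):
            Fp = fisher_vector_product(per_example_grads, p)
            alpha = rs_old / (p @ Fp)
            d += alpha * p
            r -= alpha * Fp
            rs_new = r @ r
            if np.sqrt(rs_new) < 1e-8:
                break
            p = r + (rs_new / rs_old) * p
            rs_old = rs_new
        return d   # natural gradient direction; scaled by a step size in the update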


Title: Metric-Free Natural Gradient for Joint-Training of Boltzmann Machines
<http://openreview.net/document/88a8fd8a-5af9-4189-9059-9749a5da5bdc#88a8fd8a-5af9-4189-9059-9749a5da5bdc>
Guillaume Desjardins, Razvan Pascanu, Aaron Courville, Yoshua Bengio

This paper introduces the Metric-Free Natural Gradient (MFNG) algorithm for
training Boltzmann Machines. Similar in spirit to the Hessian-Free method
of Martens [8], our algorithm belongs to the family of truncated Newton
methods and exploits an efficient matrix-vector product to avoid
explicitly storing the natural gradient metric $L$. This metric is shown
to be the expected second derivative of the log-partition function (under
the model distribution), or equivalently, the variance of the vector of
partial derivatives of the energy function. We evaluate our method on the
task of joint-training a 3-layer Deep Boltzmann Machine and show that MFNG
does indeed have faster per-epoch convergence compared to Stochastic
Maximum Likelihood with centering, though wall-clock performance is
currently not competitive.
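
The abstract's characterization of the metric suggests a simple Monte Carlo
estimator: if $S$ is the matrix of centered energy gradients over samples from
the model, then $L v$ can be approximated as $S^T (S v) / M$ without ever
forming $L$. The sketch below (assumed names, not the paper's code) shows such
a metric-vector product; it would then be handed to an iterative linear solver
such as the conjugate gradient routine sketched above.

    import numpy as np

    def metric_vector_product(energy_grads, v, damping=1e-4):
        # energy_grads: (M, P) matrix of dE/dtheta evaluated on M samples drawn
        # from the model distribution (e.g. by Gibbs sampling in a DBM).
        centered = energy_grads - energy_grads.mean(axis=0)   # center each parameter's gradient
        # L = Cov(dE/dtheta)  =>  L v ~= centered^T (centered v) / M
        Cv = centered @ v                                      # shape (M,)
        return centered.T @ Cv / energy_grads.shape[0] + damping * v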