Razvan and I will be discussing our recent ICLR submissions tomorrow afternoon at 2:00pm. We have discussed very similar topics in past tea-talks, but hopefully things will be much clearer (and more convincing) now that the dust has settled a little bit. Next week, we will have Ian and Caglar discussing their ICML (maxout) and ICLR submissions, respectively.
More volunteers will be needed in 2 weeks to discuss other ICLR submissions. See you there!
Title: Natural Gradient Revisited
Razvan Pascanu, Yoshua Bengio
The aim of this paper is twofold. First, we show that
Hessian-Free optimization (Martens, 2010) and Krylov Subspace Descent
(Vinyals and Povey, 2012) can be described as implementations of Natural
Gradient Descent due to their use of the extended Gauss-Newton
approximation of the Hessian. Second, we re-derive Natural Gradient
from basic principles, contrasting the two versions of the algorithm
that appear in the literature.
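For reference, here is a minimal sketch of the natural gradient update the abstract refers to (the notation is mine, not necessarily the paper's): the gradient is preconditioned by the inverse of the Fisher information matrix $F$,

$$
\Delta\theta \;\propto\; -\,F^{-1}\,\nabla_\theta \mathcal{L}(\theta),
\qquad
F \;=\; \mathbb{E}_{x,\;t \sim p_\theta(t\mid x)}\!\left[
  \nabla_\theta \log p_\theta(t\mid x)\,
  \nabla_\theta \log p_\theta(t\mid x)^{\top}
\right],
$$

and the claimed connection is that, for standard loss functions, the extended Gauss-Newton matrix used by Hessian-Free and Krylov Subspace Descent coincides with (a version of) this $F$.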
Title: Metric-Free Natural Gradient for Joint-Training of Boltzmann Machines
Guillaume Desjardins, Razvan Pascanu, Aaron Courville, Yoshua Bengio
This paper introduces the Metric-Free Natural Gradient (MFNG) algorithm
for training Boltzmann Machines. Similar in spirit to the Hessian-Free
method of Martens [8], our algorithm belongs to the family of truncated
Newton methods and exploits an efficient matrix-vector product to avoid
explicitly storing the natural gradient metric $L$. This metric is
shown to be the expected second derivative of the log-partition function
(under the model distribution), or equivalently, the variance of the
vector of partial derivatives of the energy function. We evaluate our
method on the task of joint-training a 3-layer Deep Boltzmann Machine
and show that MFNG does indeed have faster per-epoch convergence
compared to Stochastic Maximum Likelihood with centering, though
wall-clock performance is currently not competitive.
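As a rough sketch of the identity the abstract mentions (my notation, assuming an energy $E_\theta(x)$ that is linear in the parameters, as in Boltzmann machines, so that $p_\theta(x) = e^{-E_\theta(x)}/Z(\theta)$ and the energy's own second derivative vanishes):

$$
L \;=\; \nabla_\theta^2 \log Z(\theta)
\;=\; \mathrm{Cov}_{x \sim p_\theta}\!\big[\nabla_\theta E_\theta(x)\big]
\;=\; \mathbb{E}_{p_\theta}\!\big[g\,g^{\top}\big] - \mathbb{E}_{p_\theta}[g]\,\mathbb{E}_{p_\theta}[g]^{\top},
\qquad g = \nabla_\theta E_\theta(x).
$$

A matrix-vector product $Lv$ can then be estimated from model samples as $\tfrac{1}{N}\sum_i g_i (g_i^{\top} v) - \bar{g}\,(\bar{g}^{\top} v)$ without ever forming $L$, which is what makes a truncated-Newton (conjugate-gradient style) solve for the natural gradient direction feasible.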