Hi,
I would like to announce this week's Tea Talk: Harm de Vries will talk about "Equilibrated adaptive learning rates for non-convex optimization" and about optimization challenges in deep learning more generally.
When: Friday, November 27th, 14:30 to 15:30
Where: AA3195
Who: Harm de Vries
Title: Equilibrated adaptive learning rates for non-convex optimization
Link: http://arxiv.org/abs/1502.04390
== Abstract ==
Parameter-specific adaptive learning rate methods are computationally efficient ways to reduce the ill-conditioning problems encountered when training large deep networks. Following recent work that strongly suggests that most of the critical points encountered when training such networks are saddle points, we find how considering the presence of negative eigenvalues of the Hessian could help us design better suited adaptive learning rate schemes. We show that the popular Jacobi preconditioner has undesirable behavior in the presence of both positive and negative curvature, and present theoretical and empirical evidence that the so-called equilibration preconditioner is comparatively better suited to non-convex problems. We introduce a novel adaptive learning rate scheme, called ESGD, based on the equilibration preconditioner. Our experiments show that ESGD performs as well or better than RMSProp in terms of convergence speed, always clearly improving over plain stochastic gradient descent.
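
For anyone who wants to play with the idea before Friday, below is a minimal NumPy sketch (my own toy example, not code from the paper). It uses the fact that for v ~ N(0, I), E[(Hv)_i^2] equals the squared l2 norm of row i of the Hessian, so averaging squared Hessian-vector products gives the equilibration preconditioner; it then compares that estimate with the Jacobi preconditioner |H_ii| on a small indefinite matrix and takes one ESGD-style step. The Hessian-vector product is exact here (in a network it would come from the R-operator / double backprop), and the learning rate and damping values are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

# A small symmetric, indefinite "Hessian" (mixed positive and negative
# eigenvalues, like the saddle points discussed in the abstract).
A = rng.normal(size=(5, 5))
H = (A + A.T) / 2.0

# Equilibration estimate: for v ~ N(0, I), E[(Hv)_i^2] = ||H_i,:||_2^2,
# so averaging squared Hessian-vector products recovers the row norms.
n_samples = 10_000
D = np.zeros(H.shape[0])
for _ in range(n_samples):
    v = rng.normal(size=H.shape[0])
    Hv = H @ v                       # exact here; R-op / double backprop in a net
    D += Hv ** 2
equilibration = np.sqrt(D / n_samples)

print("equilibration estimate :", np.round(equilibration, 3))
print("true row norms ||H_i||2:", np.round(np.linalg.norm(H, axis=1), 3))
print("Jacobi |H_ii|          :", np.round(np.abs(np.diag(H)), 3))

# One ESGD-style step on f(x) = 0.5 * x^T H x: divide the gradient by the
# equilibration estimate (plus damping) rather than by |H_ii|.
x = rng.normal(size=H.shape[0])
lr, damping = 0.1, 1e-4
grad = H @ x
x_new = x - lr * grad / (equilibration + damping)
```

Note that on an indefinite matrix the Jacobi entries |H_ii| can be small or even misleading where off-diagonal curvature dominates, while the row norms stay well behaved; that contrast is the motivation for equilibration discussed in the talk.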
Hope to see you on Friday,
j