[Lisa_teatalk] TeaTalk this Friday, 2:30pm: Equilibrated adaptive learning rates for non-convex optimization

Jörg Bornschein bornj at iro.umontreal.ca
Wed Nov 25 14:54:56 EST 2015


Hi,

I would like to announce this week's Tea Talk: Harm de Vries will talk about
"Equilibrated adaptive learning rates for non-convex optimization" and
about optimization challenges in deep learning in general.


When: Friday, November 27th, 14:30 to 15:30
Where: AA3195
Who: Harm de Vries
Title: Equilibrated adaptive learning rates for non-convex optimization
Link: http://arxiv.org/abs/1502.04390


== Abstract ==

Parameter-specific adaptive learning rate methods are computationally efficient
ways to reduce the ill-conditioning problems encountered when training large
deep networks. Following recent work that strongly suggests that most of the
critical points encountered when training such networks are saddle points, we
find how considering the presence of negative eigenvalues of the Hessian could
help us design better suited adaptive learning rate schemes. We show that the
popular Jacobi preconditioner has undesirable behavior in the presence of both
positive and negative curvature, and present theoretical and empirical evidence
that the so-called equilibration preconditioner is comparatively better suited
to non-convex problems. We introduce a novel adaptive learning rate scheme,
called ESGD, based on the equilibration preconditioner. Our experiments show
that ESGD performs as well or better than RMSProp in terms of convergence
speed, always clearly improving over plain stochastic gradient descent.
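
For anyone who wants a concrete picture before Friday: below is a minimal NumPy
sketch of the equilibration idea on a toy indefinite quadratic. It is my own
illustration, not the authors' code; the toy Hessian, learning rate, damping and
step count are arbitrary choices for the example. The diagonal preconditioner
D_ii = ||H_i,:|| is estimated from Hessian-vector products with Gaussian probe
vectors, and the gradient step is divided by it.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy ill-conditioned, indefinite quadratic f(x) = 0.5 * x^T H x,
    # with mixed positive and negative curvature (a saddle at x = 0).
    H = np.diag([100.0, -1.0, 0.01])

    def grad(x):
        return H @ x            # gradient of the quadratic

    def hess_vec(x, v):
        return H @ v            # Hessian-vector product (exact for this toy problem)

    x = np.array([1.0, 1.0, 1.0])
    D = np.zeros_like(x)        # running estimate of squared Hessian row norms
    lr, damping, n_steps = 0.1, 1e-4, 100

    for t in range(1, n_steps + 1):
        v = rng.standard_normal(x.shape)   # Gaussian probe vector
        Hv = hess_vec(x, v)
        D += Hv ** 2                       # E[(Hv)_i^2] = sum_j H_ij^2 = ||H_i,:||^2
        precond = np.sqrt(D / t) + damping # equilibration preconditioner estimate
        x = x - lr * grad(x) / precond     # preconditioned gradient step

    print("final x:", x)

In this toy run the positive-curvature directions contract at similar rates
despite the 10^4 spread in curvature, while the iterate moves away from the
saddle along the negative-curvature direction, which is roughly the behavior
the abstract argues for.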



Hope to see you on Friday,


j

