[Lisa_teatalk] TeaTalk this Friday, 2:30pm: Equilibrated adaptive learning rates for non-convex optimization

Parameter-specific adaptive learning rate methods are computationally efficient ways to reduce the ill-conditioning problems encountered when training large deep networks. Following recent work that strongly suggests that most of the critical points encountered when training such networks are saddle points, we show how considering the presence of negative eigenvalues of the Hessian can help us design better-suited adaptive learning rate schemes.
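For anyone who wants a concrete picture before Friday: below is a minimal sketch of an equilibrated-SGD-style update, assuming the talk follows the paper's recipe of scaling each gradient coordinate by the equilibration preconditioner D_i = sqrt(E_v[(Hv)_i^2]), estimated from Hessian-vector products against Gaussian probe vectors v. The PyTorch code, function names, and hyperparameters here are my own illustration, not the speaker's implementation.

```python
# Minimal ESGD-style sketch (illustrative, not the speaker's code).
import torch

def esgd_step(loss_fn, params, D_sq, lr=1e-2, damping=1e-4, beta=0.05):
    """One equilibrated update: divide the gradient by sqrt(D_sq),
    where D_sq is a running estimate of E_v[(Hv)^2], v ~ N(0, I)."""
    loss = loss_fn()
    # First backward pass, kept differentiable so we can take H v below.
    grads = torch.autograd.grad(loss, params, create_graph=True)
    flat_g = torch.cat([g.reshape(-1) for g in grads])
    # Hessian-vector product with a Gaussian probe (double backprop).
    v = torch.randn_like(flat_g)
    hv = torch.autograd.grad(flat_g @ v, params)
    flat_hv = torch.cat([h.reshape(-1) for h in hv])
    # Exponential moving average of (Hv)^2 -> equilibration preconditioner.
    D_sq.mul_(1.0 - beta).add_(beta * flat_hv.pow(2))
    # Preconditioned gradient step, applied in place.
    step = lr * flat_g.detach() / (D_sq.sqrt() + damping)
    offset = 0
    with torch.no_grad():
        for p in params:
            n = p.numel()
            p.sub_(step[offset:offset + n].view_as(p))
            offset += n
    return loss.item()

# Toy usage: an ill-conditioned quadratic, f(w) = 0.5 * w^T A w.
A = torch.diag(torch.tensor([100.0, 1.0, 0.01]))
w = torch.randn(3, requires_grad=True)
D_sq = torch.zeros(3)
for _ in range(200):
    esgd_step(lambda: 0.5 * w @ (A @ w), [w], D_sq)
```

Note the scaling sqrt(E[(Hv)^2]) stays positive even where the Hessian has negative eigenvalues, which, if the talk follows the paper, is the motivation for preferring equilibration over a plain diagonal (Jacobi-style) preconditioner near saddle points.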