TeaTalk this Friday, 2:30pm: Equilibrated adaptive learning rates for non-convex optimization

Parameter-specific adaptive learning rate methods are computationally efficient ways to reduce the ill-conditioning problems encountered when training large deep networks. Following recent work that strongly suggests that most of the critical points encountered when training such networks are saddle points, we show how considering the presence of negative eigenvalues of the Hessian could help us design better suited adaptive learning rate schemes.
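The announcement cuts off before the algorithmic details, so here is a minimal sketch of the equilibration idea the title refers to, assuming the standard row-norm estimator: for v ~ N(0, I), E[(Hv)_i^2] = ||H_i||^2, so the diagonal preconditioner D_i = ||H_i||_2 can be estimated from Hessian-vector products without ever forming H. The toy 6x6 indefinite Hessian, the sample count, and the side-by-side comparison with the Jacobi preconditioner |diag(H)| are illustrative choices, not material from the talk itself.

import numpy as np

rng = np.random.default_rng(0)

# Toy indefinite Hessian with both positive and negative eigenvalues
# (a saddle at the origin). Purely illustrative, not from the talk.
Q, _ = np.linalg.qr(rng.standard_normal((6, 6)))
H = Q @ np.diag([100.0, 10.0, 1.0, -0.5, -5.0, -50.0]) @ Q.T

# Equilibration preconditioner: D_i = ||H_i||_2 (row norms of the Hessian),
# estimated via E[(Hv)_i^2] = ||H_i||^2 for v ~ N(0, I). In a real network,
# H @ v would be a Hessian-vector product rather than an explicit matrix.
def equilibration_estimate(n_samples=500):
    acc = np.zeros(H.shape[0])
    for _ in range(n_samples):
        v = rng.standard_normal(H.shape[0])
        acc += (H @ v) ** 2
    return np.sqrt(acc / n_samples)

equil = equilibration_estimate()
jacobi = np.abs(np.diag(H))  # Jacobi preconditioner: |diag(H)|

print("row norms, exact:    ", np.round(np.linalg.norm(H, axis=1), 1))
print("row norms, estimated:", np.round(equil, 1))

# Conditioning of the preconditioned Hessian D^{-1} H under each scheme.
print("cond(H)            :", np.linalg.cond(H))
print("cond, Jacobi       :", np.linalg.cond(H / jacobi[:, None]))
print("cond, equilibration:", np.linalg.cond(H / equil[:, None]))

Under the row-norm scaling every row of the preconditioned Hessian has unit norm, so no single parameter direction dominates the step; the |diag(H)| scaling loses that guarantee whenever positive and negative curvature nearly cancel on the diagonal, which is the mixed-curvature situation the abstract alludes to.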

Posted by Jörg Bornschein to lisa_teatalk@iro.umontreal.ca