This week we have a cool new MILA professor, *Ioannis Mitliagkas*, giving a
talk this *Friday Sep 29* at a new, earlier time: *10:30 AM* in room *AA6214*.
If time permits, he'll also be giving details of his winter course on
topics in AI!
Hope to see you all there! Don't forget about the earlier time!!
- Michael
*KEYWORDS* optimization, YellowFin, large-scale DL, GAN stabilization
*TITLE*
Understanding momentum dynamics for faster training, better scaling, and
easier tuning
*ABSTRACT*
This talk revolves around Polyak’s momentum gradient descent method, also
known as ‘momentum’. Its stochastic version, momentum stochastic gradient
descent (SGD), is one of the most commonly used optimization methods in
deep learning. Throughout the talk we will study a number of important
properties of this versatile method, and see how this understanding can be
used to engineer better deep learning systems.
I will first go over the basic formulation of momentum. Then I will
summarize a theoretical result on a previously unknown connection between
momentum dynamics and asynchronous optimization. Understanding this
connection allows us to improve the efficiency of large-scale deep
learning systems. I will go over a recent collaboration with Intel and the
National Energy Research Scientific Computing Center (NERSC) on a 15
PetaFLOP system consisting of 9600 nodes. Finally, I will demonstrate how
analyzing the behavior of momentum on simple objectives can lead to tuning
rules for its learning rate and momentum hyperparameters. Our
implementation of these rules is called YellowFin and is a simple adaptive
method that can handle different objectives, as well as varying
asynchronous dynamics, without hand-tuning. YellowFin often outperforms
state-of-the-art adaptive methods. At the end of the talk, I will discuss
some preliminary thoughts on the training dynamics of GANs and some ideas
on how momentum dynamics can, again, play a key role in stabilizing
adversarial training. To close, if time permits, I will give an outline of my
other research interests, as well as a rough plan for my upcoming ‘Topics
in AI’ class.
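
For anyone who wants a concrete picture before the talk, here is a minimal
Python sketch of the Polyak (heavy-ball) momentum update that the abstract
refers to, applied to a toy quadratic; the objective and hyperparameter
values below are illustrative assumptions only, not taken from the talk.

    import numpy as np

    def momentum_sgd_step(w, v, grad, lr=0.01, mu=0.9):
        # Polyak / heavy-ball momentum: the velocity v accumulates past
        # gradients, and the parameters w move along that accumulated direction.
        v = mu * v - lr * grad
        w = w + v
        return w, v

    # Toy ill-conditioned quadratic f(w) = 0.5 * w^T A w (assumed for the demo).
    A = np.diag([1.0, 10.0])
    w = np.array([1.0, 1.0])
    v = np.zeros_like(w)

    for _ in range(100):
        grad = A @ w              # gradient of the quadratic at w
        w, v = momentum_sgd_step(w, v, grad)

    print(w)  # decays toward the minimizer at the origin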
*BIO*
Ioannis Mitliagkas is an assistant professor in the Department of Computer
Science and Operations Research (DIRO) at the University of
Montreal. Before that, he was a Postdoctoral Scholar in the Departments of
Statistics and Computer Science at Stanford University. He obtained his
Ph.D. from the Department of Electrical and Computer Engineering at the
University of Texas at Austin. His research focuses on statistical learning
and inference problems, with work on efficient large-scale and distributed
algorithms, theoretical and data-dependent guarantees, and the tuning of
complex systems. His recent work includes understanding and optimizing the
scan order used in Gibbs sampling for inference, as well as understanding the
interaction between optimization and the dynamics of large-scale learning
systems.