[Lisa_seminaires] [Lisa_montreal] [Tea Talk] Ioannis Mitliagkas, Fri Sep 29, 10:30AM, AA6214

Joseph Paul Cohen joseph at josephpcohen.com
Fri Sep 29 10:11:18 EDT 2017


There will be food!

On Sep 26, 2017 09:59, "Michael Noukhovitch" <mnoukhov at gmail.com> wrote:

> This week we have a cool new MILA professor, *Ioannis Mitliagkas,* giving
> a talk this *Friday Sep 29* at a new, earlier time: *10:30AM* in room
> *AA6214*. If time permits, he'll also be giving details of his winter
> course on ‘Topics in AI’!
>
> Hope to see you all there! Don't forget about the earlier time!!
> - Michael
>
> *KEYWORDS* optimization, YellowFin, large scale DL, GAN stabilization
>
> *TITLE*
> Understanding momentum dynamics for faster training, better scaling, and
> easier tuning
>
>
> *ABSTRACT*
> This talk revolves around Polyak’s momentum gradient descent method, also
> known as ‘momentum’. Its stochastic version, momentum stochastic gradient
> descent (SGD), is one of the most commonly used optimization methods in
> deep learning. Throughout the talk we will study a number of important
> properties of this versatile method, and see how this understanding can be
> used to engineer better deep learning systems.
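>
> For reference, the update at the heart of the talk is Polyak’s heavy-ball
> rule: a velocity term accumulates past gradients and the iterate moves
> along it. Below is a minimal NumPy sketch of momentum SGD; the function
> names, hyperparameter values, and toy objective are illustrative, not
> taken from the talk:
>
>     import numpy as np
>
>     def momentum_sgd(x0, grad, lr=0.01, mu=0.9, steps=1000):
>         """Polyak momentum SGD: v <- mu*v - lr*g;  x <- x + v."""
>         x = np.asarray(x0, dtype=float)
>         v = np.zeros_like(x)
>         for _ in range(steps):
>             g = grad(x)            # (possibly noisy) gradient estimate
>             v = mu * v - lr * g    # accumulate velocity
>             x = x + v              # take the momentum step
>         return x
>
>     # Toy usage: minimize f(x) = 0.5*||x||^2 from a noisy gradient oracle.
>     noisy_grad = lambda x: x + 0.01 * np.random.randn(*x.shape)
>     x_min = momentum_sgd(np.ones(10), noisy_grad)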
>
> I will first go over the basic formulation of momentum. Then I will
> summarize a theoretical result on a previously unknown connection between
> momentum dynamics and asynchronous optimization. Understanding this
> connection allows us to improve the efficiency of large-scale deep
> learning systems. I will go over a recent collaboration with Intel and the
> National Energy Research Scientific Computing Center (NERSC) on a
> 15-PetaFLOP system consisting of 9,600 nodes. Finally, I will demonstrate how
> analyzing the behavior of momentum on simple objectives can lead to tuning
> rules for its learning rate and momentum hyperparameters. Our
> implementation of these rules is called YellowFin and is a simple adaptive
> method that can handle different objectives, as well as varying
> asynchronous dynamics, without hand-tuning. YellowFin often outperforms
> state-of-the-art adaptive methods. At the end of the talk, I will discuss
> some preliminary thoughts on the training dynamics of GANs and some ideas
> on how momentum dynamics can, again, play a key role in stabilizing
> adversarial training. Lastly, if time permits, I will give an outline of my
> other research interests, as well as a rough plan for my upcoming ‘Topics
> in AI’ class.
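>
> As a taste of the ‘tuning rules from simple objectives’ idea: on a
> strongly convex quadratic whose Hessian eigenvalues lie in
> [h_min, h_max], the classical heavy-ball analysis gives closed-form
> optimal hyperparameters. The sketch below implements that textbook rule,
> not YellowFin itself, and assumes the curvature range is known:
>
>     import math
>
>     def heavy_ball_tuning(h_min, h_max):
>         """Optimal heavy-ball hyperparameters on a quadratic with
>         Hessian eigenvalues in [h_min, h_max] (Polyak's analysis)."""
>         kappa = h_max / h_min                         # condition number
>         root = math.sqrt(kappa)
>         mu = ((root - 1) / (root + 1)) ** 2           # momentum
>         lr = (1 + math.sqrt(mu)) ** 2 / h_max         # learning rate
>         return lr, mu
>
>     lr, mu = heavy_ball_tuning(h_min=0.1, h_max=10.0)  # kappa = 100
>     # -> mu ~= 0.669, lr ~= 0.331
>
> YellowFin builds on this kind of analysis but estimates the relevant
> quantities on the fly during training rather than assuming them known.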
>
>
> *BIO*
> Ioannis Mitliagkas is an assistant professor in the Department of Computer
> Science and Operations Research (DIRO) at the University of
> Montreal. Before that, he was a Postdoctoral Scholar in the Departments of
> Statistics and Computer Science at Stanford University. He obtained his
> Ph.D. from the Department of Electrical and Computer Engineering at the
> University of Texas at Austin. His research focuses on statistical learning
> and inference problems, with work on efficient large-scale and distributed
> algorithms, theoretical and data-dependent guarantees, and the tuning of
> complex systems. His recent work includes understanding and optimizing the
> scan order used in Gibbs sampling for inference, as well as the
> interaction between optimization and the dynamics of large-scale learning
> systems.
>

