This week we have our very own *Zac Kenton*, a visiting researcher at MILA, giving a talk on *Friday Nov 17* at *10:30 AM* in room *AA6214*. I've suggested that the full title should be "Three factors influencing minima in SGD: You'll never believe #1 and #3!"
See you there! Michael
*KEYWORDS:* SGD, Deep Learning Theory, Generalization
*TITLE* Three factors influencing minima in SGD
*ABSTRACT* We focus on the importance of noise in stochastic gradient descent (SGD) based training of deep neural networks (DNNs). We develop theory that studies SGD training as a stochastic differential equation and show that its stationary distribution is related to the loss surface. Our analysis suggests that the combination of batch size, learning rate, and the variance of the true loss gradients acts as a hyper-parameter steering the behavior of SGD and determines the trade-offs between the depth and width of the minima that SGD converges to. In a nutshell, a higher ratio of learning rate to batch size leads to wider minima. We validate our theory by examining the correlation between these three factors and the final performance and sharpness of the minimum found. As a verification of our theory, we empirically demonstrate that the learning dynamics is similar between experiments with different learning rates and batch sizes in SGD if the ratio of learning rate to batch size is the same.
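If you want the punchline ahead of time, here is a rough sketch of the SDE view the abstract refers to, in my own notation and under the usual assumptions (small learning rate \eta, batch size B, and roughly Gaussian minibatch-gradient noise with covariance C(\theta)/B):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Minibatch SGD update, followed by its continuous-time approximation
% (sketch: small step size, Gaussian gradient noise with covariance C(theta)/B).
\begin{align*}
  \theta_{t+1} &= \theta_t - \eta\, g_B(\theta_t),
    \qquad \operatorname{Cov}\!\left[g_B(\theta)\right] \approx \tfrac{1}{B}\, C(\theta), \\
  d\theta &= -\nabla L(\theta)\, dt
    + \sqrt{\tfrac{\eta}{B}}\; C(\theta)^{1/2}\, dW_t .
\end{align*}
% The noise scale, and hence the width/depth trade-off of the minima SGD
% settles into, is governed by the ratio eta/B rather than by eta or B alone.
\end{document}
```

In other words, if you double the batch size you need to double the learning rate to keep the same noise level and similar learning dynamics.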
*BIO* Zac studied Mathematics at the University of Cambridge for his bachelor's and master's degrees (2009-2013). He then completed a PhD in theoretical physics at the Centre for Research in String Theory, Queen Mary University of London (2013-August 2017). His thesis was on string theory and early-universe inflationary cosmology. In the final stages of his PhD he also worked as a data scientist at ASI Data Science, a London-based data science startup. At MILA he's been working with Stanislaw Jastrzebski, Devansh Arpit and Prof. Bengio on topics around generalization in SGD.
Reminder: this is in 50 min!