Reminder: this happens in 1 hour!
On Dec 11, 2018, at 17:20, rim.assouel@gmail.com wrote:
This week we have *Praneeth* from *MSR India* giving a talk on *Thursday,
December 13, 2018* at *15:30* in room *AA3195*.
Will this talk be streamed? Yes: https://mila.bluejeans.com/4255239897/webrtc
Pay attention to the day change! This tea talk will happen on THURSDAY
:)
As it will be the last tea talk of the year, it will feature actual tea,
talks, and snacks :)
See you there!
Rim and Sai
*TITLE* On momentum methods and acceleration in stochastic optimization
*ABSTRACT*
It is well known that momentum gradient methods (e.g., Polyak's heavy
ball, Nesterov's acceleration) yield significant improvements over vanilla
gradient descent in deterministic optimization (i.e., where we have access
to the exact gradient of the function to be minimized). However, there is
widespread sentiment that these momentum methods are not effective for the
purposes of stochastic optimization due to their instability and error
accumulation. Numerous works have attempted to quantify these instabilities
in the face of either statistical or non-statistical errors (Paige, 1971;
Proakis, 1974; Polyak, 1987; Greenbaum, 1989; Roy and Shynk, 1990; Sharma
et al., 1998; d’Aspremont, 2008; Devolder et al., 2013, 2014; Yuan et al.,
2016) but a precise understanding is lacking. This work considers these
issues for the special case of stochastic approximation for the linear
least squares regression problem, and:
1. shows that classical momentum methods (heavy ball and Nesterov's
acceleration) indeed do not offer any improvement over stochastic gradient
descent, and
2. introduces an accelerated stochastic gradient method that provably
achieves the minimax-optimal statistical risk faster than stochastic
gradient descent (and classical momentum methods).
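For readers who want the updates in front of them, the textbook forms of
these two momentum methods (in our own notation, not taken from the talk;
$\eta$ is a step size and $\beta$ a momentum parameter) are:

    Heavy ball:  $w_{t+1} = w_t - \eta \nabla f(w_t) + \beta (w_t - w_{t-1})$
    Nesterov:    $v_t = w_t + \beta (w_t - w_{t-1}), \qquad w_{t+1} = v_t - \eta \nabla f(v_t)$

In the stochastic setting considered in the talk, $\nabla f$ is replaced by
a noisy gradient estimate, e.g., the gradient computed on a single sample.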
Critical to the analysis is a sharp characterization of accelerated
stochastic gradient descent as a stochastic process. While the results are
rigorously established for the special case of linear least squares
regression, experiments suggest that the conclusions hold for the training
of deep neural networks.
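To make the comparison concrete, below is a minimal sketch of plain SGD
versus heavy-ball SGD on a synthetic streaming least-squares problem. This
is illustrative only: it is not the accelerated method from the talk, and
the problem size, step size, and momentum value are arbitrary choices.

    # Minimal sketch: plain SGD vs. heavy-ball SGD on synthetic streaming
    # linear least squares. Illustrative only -- not the accelerated method
    # from the talk; hyperparameters are arbitrary demonstration values.
    import numpy as np

    rng = np.random.default_rng(0)
    d, n_steps = 20, 5000
    w_star = rng.normal(size=d)          # ground-truth regression vector

    def sample():
        """Draw one (x, y) pair from the streaming least-squares model."""
        x = rng.normal(size=d)
        y = x @ w_star + 0.1 * rng.normal()
        return x, y

    def run(momentum=0.0, lr=0.01):
        """Stochastic gradient descent with optional heavy-ball momentum."""
        w = np.zeros(d)
        v = np.zeros(d)                  # velocity (previous update direction)
        for _ in range(n_steps):
            x, y = sample()
            grad = (x @ w - y) * x       # stochastic gradient of 0.5*(x.w - y)^2
            v = momentum * v - lr * grad
            w = w + v
        return np.linalg.norm(w - w_star)

    print("SGD        error:", run(momentum=0.0))
    print("heavy-ball error:", run(momentum=0.9))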
*BIO*
Praneeth Netrapalli has been a researcher at Microsoft Research India,
Bengaluru, since August 2016. Prior to this, he was a postdoctoral
researcher at Microsoft Research New England in Cambridge, MA. He obtained
his MS and PhD from UT Austin and his B.Tech from IIT Bombay, all in
Electrical Engineering.
His research focuses on designing efficient algorithms for machine
learning problems, primarily via stochastic and nonconvex optimization.
More information about his research is available on his home page:
http://praneethnetrapalli.org/