[Lisa_seminaires] [mila-tous] [DAY CHANGE][Tea Talk] Praneeth (MSR India) Thursday December 13 2018 15:30 AA3195

Ioannis Mitliagkas ioannis at iro.umontreal.ca
Wed Dec 12 11:29:29 EST 2018


All,

try not to miss this talk!

Praneeth Netrapalli is a very successful researcher whose work is highly
relevant to our community. His paper "On the insufficiency of existing
momentum schemes for Stochastic Optimization" received an oral presentation
at ICLR 2018.

Also, Praneeth will be visiting Mila tomorrow and Friday and is eager to
meet and talk with you. Please send Rim and me an email and we will arrange
a meeting!

Best,
Ioannis

On Tue, Dec 11, 2018 at 5:22 PM <rim.assouel at gmail.com> wrote:

> This week we have *Praneeth* from *MSR India* giving a talk on *Thursday
> December 13 2018* at *15:30* in room *AA3195*
>
> Will this talk be streamed <https://mila.bluejeans.com/4255239897/webrtc>?
> Yes
>
> Pay attention to the day change!! This tea talk will happen on THURSDAY :)
>
> As it will be the last tea talk of the year, it will feature actual tea, a
> talk, and snacks :)
>
> See you there!
> Rim and Sai
>
> *TITLE* On momentum methods and acceleration in stochastic optimization
>
> *ABSTRACT*
>
> It is well known that momentum gradient methods (e.g., Polyak's heavy
> ball, Nesterov's acceleration) yield significant improvements over vanilla
> gradient descent in deterministic optimization (i.e., where we have access
> to the exact gradient of the function to be minimized). However, there is
> widespread sentiment that these momentum methods are not effective for the
> purposes of stochastic optimization due to their instability and error
> accumulation. Numerous works have attempted to quantify these instabilities
> in the face of either statistical or non-statistical errors (Paige, 1971;
> Proakis, 1974; Polyak, 1987; Greenbaum, 1989; Roy and Shynk, 1990; Sharma
> et al., 1998; d’Aspremont, 2008; Devolder et al., 2013, 2014; Yuan et al.,
> 2016) but a precise understanding is lacking. This work considers these
> issues for the special case of stochastic approximation for the linear
> least squares regression problem, and shows that:
>
> 1. classical momentum methods (heavy ball and Nesterov's acceleration)
> indeed do not offer any improvement over stochastic gradient descent, and
> 2. there is an accelerated stochastic gradient method that provably
> achieves the minimax optimal statistical risk faster than stochastic
> gradient descent (and classical momentum methods).
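>
> For readers less familiar with these methods, the standard textbook update
> rules (the paper's exact parameterization may differ) are, with step size
> \eta, momentum parameter \beta, and stochastic gradient \hat{g}:
>
>   SGD:         x_{t+1} = x_t - \eta \hat{g}(x_t)
>   heavy ball:  x_{t+1} = x_t - \eta \hat{g}(x_t) + \beta (x_t - x_{t-1})
>   Nesterov:    y_t = x_t + \beta (x_t - x_{t-1}),   x_{t+1} = y_t - \eta \hat{g}(y_t)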
>
> Critical to the analysis is a sharp characterization of accelerated
> stochastic gradient descent as a stochastic process. While the results are
> rigorously established for the special case of linear least squares
> regression, experiments suggest that the conclusions hold for the training
> of deep neural networks.
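>
> The accelerated method itself is not specified in this abstract, but as a
> rough illustration of the kind of comparison involved, here is a minimal
> Python sketch (synthetic Gaussian data, hypothetical parameter choices)
> contrasting plain SGD with heavy-ball momentum SGD on a stochastic linear
> least-squares problem:
>
>   import numpy as np
>
>   rng = np.random.default_rng(0)
>   d = 20
>   w_star = rng.normal(size=d)  # ground-truth weights for the toy problem
>
>   def sample_batch(n=10):
>       # noisy linear observations y = X w* + noise
>       X = rng.normal(size=(n, d))
>       y = X @ w_star + 0.1 * rng.normal(size=n)
>       return X, y
>
>   def run(use_momentum, steps=2000, eta=0.02, beta=0.9):
>       w = np.zeros(d)
>       w_prev = w.copy()
>       for _ in range(steps):
>           X, y = sample_batch()
>           g = X.T @ (X @ w - y) / len(y)  # stochastic gradient of 0.5*||Xw - y||^2 / n
>           if use_momentum:
>               w_next = w - eta * g + beta * (w - w_prev)  # heavy-ball update
>           else:
>               w_next = w - eta * g                        # vanilla SGD update
>           w_prev, w = w, w_next
>       return np.linalg.norm(w - w_star)
>
>   print("SGD        ||w - w*|| :", run(use_momentum=False))
>   print("heavy ball ||w - w*|| :", run(use_momentum=True))
>
> This is only a sketch of the experimental setting, not the accelerated
> algorithm analyzed in the talk.
>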
> *BIO*
> Praneeth Netrapalli has been a researcher at Microsoft Research India,
> Bengaluru, since August 2016. Prior to this, he was a postdoctoral
> researcher at Microsoft Research New England in Cambridge, MA. He obtained
> his MS and PhD from UT Austin and his B.Tech from IIT Bombay, all in
> Electrical Engineering.
> His research focuses on designing efficient algorithms for machine learning
> problems primarily via stochastic and nonconvex optimization. More
> information about his research is available on his home page
> http://praneethnetrapalli.org/
>

