This week we will have two tea talks (morning and afternoon) on Friday! Wow!
The first talk will be by *Nicolas Le Roux* on *May 25, 2018* at *10:30 AM* in room *AA3195* (note the room change; we'll most likely be in 3195 for the summer).
Streaming is, as always, at this link: https://bluejeans.com/809027115/webrtc but watch it live because there will be no recordings!
Don't waver on going to this talk; it is sure to be invariantly excellent!

Michael
*TITLE* An exploration of variance reduction techniques in stochastic optimization
*KEYWORDS* deep learning theory, optimization
*ABSTRACT* I will present recent and ongoing work on reducing the variance in stochastic optimization techniques to speed up and simplify the resulting algorithms. In particular, stochastic gradient methods can suffer from high variance, which limits their convergence speed. While variance reduction techniques exist in the finite-sum case, they are rarer in the online case. We demonstrate how an increasing momentum provides variance reduction in the online case, at the expense of a bias, and how that bias can be countered by an extrapolation step. The resulting algorithm differs from iterate averaging only by a factor, but, in the context of minimizing a quadratic function, this difference is enough to yield the first algorithm that, with a constant stepsize, converges linearly in the noiseless case and sublinearly in the homoscedastic-noise case.
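For those who want to play with the core idea before the talk, here is a tiny sketch (my own illustration, not Nicolas's algorithm): it runs SGD on a noisy quadratic with and without a momentum coefficient that increases toward 1, so you can see the variance reduction (and the bias it introduces). The extrapolation correction mentioned in the abstract is omitted, and the schedule, stepsize, and noise level are arbitrary assumptions.

```python
import numpy as np

# Minimal sketch: plain SGD vs. SGD whose update uses a running average of
# past stochastic gradients (momentum coefficient increasing toward 1),
# on a noisy quadratic. Illustrative only; not the speaker's exact method.

rng = np.random.default_rng(0)
d = 20
A = np.diag(np.linspace(0.1, 1.0, d))   # curvature of the quadratic
x_star = rng.normal(size=d)             # minimizer of 0.5*x'Ax - b'x
b = A @ x_star

def noisy_grad(x, sigma=0.5):
    """Exact gradient plus homoscedastic Gaussian noise."""
    return A @ x - b + sigma * rng.normal(size=d)

def run(momentum_schedule, steps=5000, lr=0.5):
    x = np.zeros(d)          # iterate
    g_avg = np.zeros(d)      # averaged gradient (momentum buffer)
    for t in range(1, steps + 1):
        beta = momentum_schedule(t)
        # Larger beta -> lower variance of g_avg, but more bias toward
        # gradients evaluated at stale iterates.
        g_avg = beta * g_avg + (1.0 - beta) * noisy_grad(x)
        x = x - lr * g_avg
    return np.linalg.norm(x - x_star)

# Distance to the minimizer after training; the averaged-gradient run
# typically ends up closer because the noise is averaged out.
print("plain SGD          :", run(lambda t: 0.0))
print("increasing momentum:", run(lambda t: t / (t + 1.0)))
```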
*BIO* Nicolas Le Roux got an MSc in Applied Maths from Ecole Centrale Paris and an MSc in Maths, Learning and Vision from ENS Cachan. He got his PhD in 2008 from the University of Montreal, where he worked with Yoshua Bengio on neural networks in general and their optimization in particular. He then moved to Microsoft Research Cambridge to work on generative models of images with John Winn. In 2010, he joined Inria in Francis Bach's team to work on large-scale convex optimization. From 2012 to 2017, he created and managed the research team at Criteo in Paris. He joined Google Brain Montreal in 2017, where he now works on large-scale optimization and reinforcement learning.