Where is everybody? The tea-talk is in 45 minutes and I see almost no one in the lab.
Please come, at least to honour our guest (who is starting a faculty position at UBC in Vancouver).
-- Yoshua
On Tue, Aug 19, 2014 at 10:52 AM, Kyung Hyun Cho cho.k.hyun@gmail.com wrote:
Dear all,
We have a talk by Prof. Mark Schmidt from the University of British Columbia (UBC). He will tell us about the 'stochastic average gradient' method.
I apologize for the late announcement, but I hope to see many of you tomorrow!
- Cho
===
- Speaker: Prof. Mark Schmidt (UBC)
- Date and Time: 20 Aug @ 13:30
- Place: AA3195
- Title: Stochastic Average Gradient
- Abstract:
We propose the stochastic average gradient (SAG) method for optimizing the sum of a finite number of smooth convex functions. Like stochastic gradient (SG) methods, the SAG method's iteration cost is independent of the number of terms in the sum. However, by incorporating a memory of previous gradient values, the SAG method achieves a faster convergence rate than black-box SG methods. Specifically, under standard assumptions the convergence rate is improved from O(1/k) to a linear convergence rate of the form O(p^k) for some p < 1. Further, in many cases the convergence rate of the new method is also faster than black-box deterministic gradient methods, in terms of the number of gradient evaluations. Beyond these theoretical results, the algorithm also has a variety of appealing practical properties: it supports regularization and sparse datasets, it allows an adaptive step-size and has a termination criterion, it allows mini-batches, and its performance can be further improved by non-uniform sampling. Numerical experiments indicate that the new algorithm often dramatically outperforms existing SG and deterministic gradient methods, and that the performance may be further improved through the use of non-uniform sampling strategies.
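
For anyone who wants to try the idea before the talk, here is a minimal Python/NumPy sketch of the basic SAG update described in the abstract: keep a table of the most recent gradient of each term, refresh one randomly chosen entry per iteration, and step along the average of the stored gradients. The function name sag, the toy least-squares example, and the conservative 1/(16L) step size are my own illustrative choices, not Prof. Schmidt's reference implementation.

import numpy as np

def sag(grad_i, x0, n, step_size, num_iters, rng=None):
    """Minimal SAG sketch: grad_i(x, i) returns the gradient of the i-th
    smooth convex term at x; n is the number of terms in the sum."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    grad_table = np.zeros((n, x.size))   # memory of previous gradient values
    grad_sum = np.zeros_like(x)          # running sum of the stored gradients
    for _ in range(num_iters):
        i = rng.integers(n)              # sample one term uniformly at random
        g_new = grad_i(x, i)
        grad_sum += g_new - grad_table[i]  # replace the stored gradient of term i
        grad_table[i] = g_new
        x -= (step_size / n) * grad_sum    # step along the average stored gradient
    return x

# Toy example (hypothetical data): least squares with terms f_i(x) = 0.5*(a_i.x - b_i)^2
A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])
grad_i = lambda x, i: (A[i] @ x - b[i]) * A[i]
L = np.max(np.sum(A**2, axis=1))         # Lipschitz constant of the individual gradients
x_hat = sag(grad_i, np.zeros(2), n=3, step_size=1.0 / (16 * L), num_iters=20000)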