The streaming link is: https://mila.bluejeans.com/4255239897/webrtc
On Fri, Nov 30, 2018 at 10:21 AM saikrishna gottipati <saikrishnagv1996@gmail.com> wrote:
> Hey all,
>
> Reminder that this is happening in PAA-5340 in 10 minutes
>
> See you there
> Rim and Sai
This week we have authors of NeurIPS-accepted papers from Mila giving lightning talks on Fri November 30 2018 at 10:30 in room AA5340
TITLE Happy NeurIPS :)
ABSTRACT
Carrying on the fantastic tradition, we are organizing another lightning talk, this time for NeurIPS! We'll have authors of NeurIPS-accepted papers give quick, ~5-minute presentations of their work.
Will this talk be streamed <https://mila.bluejeans.com/4255239897/webrtc>? Yes
See you there!
Rim and Sai
This week we have Sarath Chandar from Mila + Brain giving a talk on Fri November 23 2018 at 10:30 in room AA3195
Will this talk be streamed <https://mila.bluejeans.com/4255239897/webrtc>? Yes. Recorded? Yes
Likely To Deceive: short on puns this week (*meta-pun intended*)
See you there!
Rim and Sai
TITLE RNNs, Long-term Dependencies, and Lifelong Learning
ABSTRACT
Part 1: Towards Non-saturating Recurrent Units for Modelling Long-term Dependencies

Modelling long-term dependencies is a challenge for recurrent neural networks, primarily because gradients vanish during training as the sequence length increases. Gradients can be attenuated by transition operators and are attenuated or dropped by activation functions. Canonical architectures like the LSTM alleviate this issue by skipping information through a memory mechanism. We propose a new recurrent architecture, the Non-saturating Recurrent Unit (NRU), that relies on a memory mechanism but forgoes both saturating activation functions and saturating gates, in order to further alleviate vanishing gradients. In a series of synthetic and real-world tasks, we demonstrate that the proposed model is the only one that ranks among the top two models across all tasks, with and without long-term dependencies, when compared against a range of other architectures.

Part 2: Training Recurrent Neural Networks for Lifelong Learning

Capacity saturation and catastrophic forgetting are the central challenges of any parametric lifelong learning system. In this work, we study these challenges in the context of sequential supervised learning, with an emphasis on recurrent neural networks. To evaluate models in the lifelong learning setting, we propose a simple and intuitive curriculum-based benchmark in which models are trained on a task with increasing levels of difficulty. As a step towards developing true lifelong learning systems, we unify Gradient Episodic Memory (a catastrophic forgetting alleviation approach) and Net2Net (a capacity expansion approach). Evaluation on the proposed benchmark shows that the unified model is more suitable for the lifelong learning setting than its constituent models.
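To make the non-saturating idea concrete, here is a minimal NumPy sketch of a toy recurrent cell in this spirit; the update equations, weight names, and dimensions below are our own illustrative assumptions, not the NRU equations from the paper:

    import numpy as np

    rng = np.random.default_rng(0)

    def step(h, m, x, W):
        # One step of a toy "non-saturating" recurrent cell. ReLU's
        # derivative is 0 or 1, so unlike tanh/sigmoid it never squashes
        # gradients as activations grow.
        h_new = np.maximum(0.0, W["xh"] @ x + W["hh"] @ h + W["mh"] @ m)
        # Additive, ungated write to memory: no saturating gate sits on
        # the path (the paper's read/write scheme is more elaborate).
        m_new = m + W["hm"] @ h_new
        return h_new, m_new

    d_x, d_h, d_m = 4, 8, 8
    W = {"xh": rng.normal(0.0, 0.1, (d_h, d_x)),
         "hh": rng.normal(0.0, 0.1, (d_h, d_h)),
         "mh": rng.normal(0.0, 0.1, (d_h, d_m)),
         "hm": rng.normal(0.0, 0.1, (d_m, d_h))}
    h, m = np.zeros(d_h), np.zeros(d_m)
    for _ in range(20):                    # unroll over a random sequence
        h, m = step(h, m, rng.normal(size=d_x), W)
    print(h.shape, m.shape)                # (8,) (8,)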
BIO
Sarath Chandar is a 4th-year Ph.D. candidate at MILA working with Yoshua Bengio and Hugo Larochelle. His research interests include deep learning, reinforcement learning, and natural language processing. He is a recipient of the IBM Ph.D. Fellowship for 2018. For more details refer to http://sarathchandar.in/.
This week we have Nicolas Loizou from FAIR giving a talk on Fri November 16 2018 at 10:30 AM in room AA3195
Will this talk be streamed <https://mila.bluejeans.com/809027115/webrtc>? Yes. Recorded? Yes
And you can sign up to meet the speaker here:
Getting lazy on Fridays? FAIR enough, but Momentum is all you need ;)
See you there!
Rim and Sai
TITLE Momentum and Stochastic Momentum for Stochastic Gradient, Newton, Proximal Point and Subspace Descent Methods
ABSTRACT
In this paper we study several classes of stochastic optimization algorithms enriched with heavy ball momentum. Among the methods studied are: stochastic gradient descent, stochastic Newton, stochastic proximal point, and stochastic dual subspace ascent. This is the first time momentum variants of several of these methods are studied. We choose to perform our analysis in a setting in which all of the above methods are equivalent. We prove global non-asymptotic linear convergence rates for all methods and various measures of success, including primal function values, primal iterates (in the L2 sense), and dual function values. We also show that the primal iterates converge at an accelerated linear rate in the L1 sense. This is the first time a linear rate is shown for the stochastic heavy ball method (i.e., stochastic gradient descent with momentum). Under somewhat weaker conditions, we establish a sublinear convergence rate for Cesàro averages of primal iterates. Moreover, we propose a novel concept, which we call stochastic momentum, aimed at decreasing the cost of performing the momentum step. We prove linear convergence of several stochastic methods with stochastic momentum, and show that in some sparse data regimes and for sufficiently small momentum parameters, these methods enjoy better overall complexity than methods with deterministic momentum. Finally, we perform extensive numerical testing on artificial and real datasets, including data coming from average consensus problems.
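For intuition about the heavy ball update, here is a minimal NumPy sketch of stochastic gradient descent with momentum on a toy least-squares problem; the step size and momentum parameter are illustrative choices, not constants from the paper:

    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 200, 10
    A = rng.normal(size=(n, d))
    b = A @ rng.normal(size=d)             # consistent linear system Ax = b

    def stochastic_heavy_ball(gamma=0.01, beta=0.5, iters=5000):
        # Heavy ball update: x+ = x - gamma * g_i(x) + beta * (x - x_prev),
        # where g_i is the gradient of one randomly sampled row's loss
        # f_i(x) = 0.5 * (a_i^T x - b_i)^2.
        x = x_prev = np.zeros(d)
        for _ in range(iters):
            i = rng.integers(n)
            g = (A[i] @ x - b[i]) * A[i]   # stochastic gradient
            x, x_prev = x - gamma * g + beta * (x - x_prev), x
        return x

    x = stochastic_heavy_ball()
    print(np.linalg.norm(A @ x - b))       # residual shrinks toward 0

The paper's "stochastic momentum" idea replaces the full momentum term with a cheaper randomized version; the sketch above uses the standard deterministic momentum step.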
BIO
Nicolas is a final-year PhD student in the School of Mathematics at the University of Edinburgh. More specifically, he is a member of the Operational Research and Optimization Group (ERGO) under the supervision of Dr. Peter Richtarik. Before moving to Edinburgh, he spent 4 years in Athens as an undergraduate student in the Department of Mathematics at the National and Kapodistrian University of Athens <http://en.uoa.gr/> and 1 year as a postgraduate student at Imperial College London, where he obtained an MSc in Computing (Computational Management Science) <http://www.imperial.ac.uk/computing/prospective-students/courses/pg/special…>.
His research interests include (but are not limited to): large-scale optimization, machine learning, deep learning, randomized numerical linear algebra, and randomized and distributed algorithms.
Website: https://www.maths.ed.ac.uk/~s1461357/
Last-minute change for a bigger room (this one has a capacity of 130, so we should be fine).
The talk will happen in Z-330, in the Claire McNicoll building!
See you there,
Rim & Sai
This week we have Hugo Larochelle from Mila + Brain giving a talk on Fri November 9 2018 at 10:30 AM in room JC S1-111
Will this talk be streamed <https://mila.bluejeans.com/809027115/webrtc>? Yes
Recorded? Yes
Feel like learning to learn? Hugo will teach you the good way, this Friday, one shot at a time ;)
See you there!
Rim and Sai
TITLE Few-Shot Learning with Meta-Learning: Progress Made and Challenges Ahead.
KEYWORDS
ABSTRACT
A lot of the recent progress on many AI tasks was enabled in part by the availability of large quantities of labeled data. Yet, humans are able to learn concepts from as little as a handful of examples. Meta-learning is a very promising framework for addressing the problem of generalizing from small amounts of data, known as few-shot learning. In meta-learning, our model is itself a learning algorithm: it takes as input a training set and outputs a classifier. For few-shot learning, it is (meta-)trained directly to produce classifiers with good generalization performance for problems with very little labeled data. In this talk, I'll present an overview of the recent research that has made exciting progress on this topic (including my own) and will discuss the challenges as well as research opportunities that remain.
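To make the "model that takes a training set and outputs a classifier" framing concrete, here is a minimal NumPy sketch of one such classifier, in the spirit of prototypical networks; the identity embedding and toy data are our own illustrative assumptions, not necessarily the methods covered in the talk:

    import numpy as np

    rng = np.random.default_rng(0)

    def fit_prototypes(support_x, support_y):
        # The "model as a learning algorithm" view: consume a small
        # labelled support set, return a classifier. The classifier
        # assigns each query to the class with the nearest class-mean
        # (prototype) in feature space.
        classes = np.unique(support_y)
        protos = np.stack([support_x[support_y == c].mean(axis=0)
                           for c in classes])
        def classify(query_x):
            dists = ((query_x[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
            return classes[dists.argmin(axis=1)]
        return classify

    # One 2-way, 5-shot episode on Gaussian-blob stand-in data. A real
    # meta-learner would train an embedding network over many such
    # episodes; the identity embedding here is purely illustrative.
    support_x = np.concatenate([rng.normal(0, 1, (5, 2)),
                                rng.normal(4, 1, (5, 2))])
    support_y = np.array([0] * 5 + [1] * 5)
    classifier = fit_prototypes(support_x, support_y)
    print(classifier(np.array([[0.2, -0.1], [3.8, 4.2]])))   # -> [0 1]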
BIO
Hugo Larochelle is a Research Scientist at Google Brain and lead of the Montreal Google Brain team. He is also a member of Yoshua Bengio's Mila and an Adjunct Professor at the Université de Montréal. Previously, he was an Associate Professor at the University of Sherbrooke. He also co-founded Whetlab, which was acquired in 2015 by Twitter, where he then worked as a Research Scientist in the Twitter Cortex group. From 2009 to 2011, he was also a member of the machine learning group at the University of Toronto, as a postdoctoral fellow under the supervision of Geoffrey Hinton. He obtained his Ph.D. at the Université de Montréal, under the supervision of Yoshua Bengio. His long-time nemesis is Aaron Courville. Finally, he has a popular online course on deep learning and neural networks, freely accessible on YouTube.