There will not be any tea talk tomorrow due to NeurIPS!
See you next week,
PS: Tong Che’s predoc III on Wednesday at 1pm
PPS: Gabriel Huang’s predoc III on Wednesday at 10am
Rim & Sai
The streaming link is: https://mila.bluejeans.com/4255239897/webrtc
On Fri, Nov 30, 2018 at 10:21 AM saikrishna gottipati <saikrishnagv1996(a)gmail.com> wrote:
> Hey all,
>
> Reminder that this is happening in PAA-5340 in 10 minutes
>
> See you there
> Rim and Sai
This week we have authors of NeurIPS accepted papers from Mila giving a talk on Fri November 30 2018 at 10:30 in room AA5340
TITLE Happy NeurIPS :)
ABSTRACT
Carrying on the fantastic tradition, we are organizing another lightning talk, this time for NeurIPS! We’ll have authors of NeurIPS accepted papers do quick, ~5-minute presentations of their work.
Will this talk be streamed <https://mila.bluejeans.com/4255239897/webrtc>? Yes
See you there!
Rim and Sai
This week we have Sarath Chandar from Mila + Brain giving a talk on Fri November 23 2018 at 10:30 in room AA3195
Will this talk be streamed <https://mila.bluejeans.com/4255239897/webrtc>? Yes
Recorded? Yes
Likely To Deceive: short on puns this week (*meta-pun intended*)
See you there!
Rim and Sai
TITLE RNNs, Long-term Dependencies, and Lifelong Learning
ABSTRACT
Part 1: Towards Non-saturating Recurrent Units for Modelling Long-term Dependencies
Modelling long-term dependencies is a challenge for recurrent neural networks, primarily because gradients vanish during training as the sequence length increases. Gradients can be attenuated by transition operators and are attenuated or dropped by activation functions. Canonical architectures like the LSTM alleviate this issue by skipping information through a memory mechanism. We propose a new recurrent architecture (Non-saturating Recurrent Unit; NRU) that relies on a memory mechanism but forgoes both saturating activation functions and saturating gates, in order to further alleviate vanishing gradients. In a series of synthetic and real-world tasks, we demonstrate that the proposed model is the only one that ranks among the top two models across all tasks, with and without long-term dependencies, when compared against a range of other architectures.

Part 2: Training Recurrent Neural Networks for Lifelong Learning
Capacity saturation and catastrophic forgetting are the central challenges of any parametric lifelong learning system. In this work, we study these challenges in the context of sequential supervised learning, with an emphasis on recurrent neural networks. To evaluate models in the lifelong learning setting, we propose a simple and intuitive curriculum-based benchmark where models are trained on a task with increasing levels of difficulty. As a step towards developing true lifelong learning systems, we unify Gradient Episodic Memory (a catastrophic forgetting alleviation approach) and Net2Net (a capacity expansion approach). Evaluation on the proposed benchmark shows that the unified model is more suitable than its constituent models for the lifelong learning setting.
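The NRU’s exact equations are in the paper; as a rough illustration of the core idea only (carry an explicit memory vector, but replace saturating tanh/sigmoid gates with non-saturating ReLUs), here is a toy recurrent cell in PyTorch. All names and sizes below are our placeholders, not the paper’s architecture.

    # Toy non-saturating recurrent cell (illustrative only; NOT the paper's NRU):
    # ReLU everywhere, so gradients are not squashed by saturating nonlinearities,
    # and an explicit memory vector carries information across time steps.
    import torch
    import torch.nn as nn

    class ToyNonSaturatingCell(nn.Module):
        def __init__(self, input_size, hidden_size, memory_size):
            super().__init__()
            self.hidden = nn.Linear(input_size + hidden_size + memory_size, hidden_size)
            self.write = nn.Linear(hidden_size, memory_size)   # what to add to memory
            self.erase = nn.Linear(hidden_size, memory_size)   # what to remove from memory

        def forward(self, x, h, m):
            h_new = torch.relu(self.hidden(torch.cat([x, h, m], dim=-1)))
            m_new = m + torch.relu(self.write(h_new)) - torch.relu(self.erase(h_new))
            return h_new, m_new

    cell = ToyNonSaturatingCell(input_size=8, hidden_size=32, memory_size=16)
    x, h, m = torch.randn(4, 8), torch.zeros(4, 32), torch.zeros(4, 16)
    h, m = cell(x, h, m)   # one recurrent step on a batch of 4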
BIO
Sarath Chandar is a 4th-year Ph.D. candidate at MILA, working with Yoshua Bengio and Hugo Larochelle. His research interests include deep learning, reinforcement learning, and natural language processing. He is a recipient of the 2018 IBM Ph.D. Fellowship. For more details, see http://sarathchandar.in/.
This week we have Nicolas Loizou from FAIR giving a talk on Fri November 16 2018 at 10:30 AM in room AA3195
Will this talk be streamed <https://mila.bluejeans.com/809027115/webrtc>? Yes
Recorded? Yes
And you can sign up to meet the speaker here:
Getting lazy on Fridays? FAIR enough, but Momentum is all you need ;)
See you there!
Rim and Sai
TITLE Momentum and Stochastic Momentum for Stochastic Gradient, Newton, Proximal Point and Subspace Descent Methods
ABSTRACT
In this paper we study several classes of stochastic optimization algorithms enriched with heavy ball momentum. Among the methods studied are: stochastic gradient descent, stochastic Newton, stochastic proximal point and stochastic dual subspace ascent. This is the first time momentum variants of several of these methods are studied. We choose to perform our analysis in a setting in which all of the above methods are equivalent. We prove global non-asymptotic linear convergence rates for all methods and various measures of success, including primal function values, primal iterates (in the L2 sense), and dual function values. We also show that the primal iterates converge at an accelerated linear rate in the L1 sense. This is the first time a linear rate is shown for the stochastic heavy ball method (i.e., stochastic gradient descent method with momentum). Under somewhat weaker conditions, we establish a sublinear convergence rate for Cesàro averages of primal iterates. Moreover, we propose a novel concept, which we call stochastic momentum, aimed at decreasing the cost of performing the momentum step. We prove linear convergence of several stochastic methods with stochastic momentum, and show that in some sparse data regimes and for sufficiently small momentum parameters, these methods enjoy better overall complexity than methods with deterministic momentum. Finally, we perform extensive numerical testing on artificial and real datasets, including data coming from average consensus problems.
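For reference, the heavy ball update the abstract builds on is x_{k+1} = x_k - gamma * g_k + beta * (x_k - x_{k-1}), where g_k is a stochastic gradient. A minimal NumPy sketch on a least-squares toy problem (the problem, step size, and momentum values are our illustrative choices, not the paper’s setting):

    # SGD with heavy ball momentum on least squares min_x ||Ax - b||^2.
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((200, 20))
    b = A @ rng.standard_normal(20)        # consistent linear system

    gamma, beta = 1e-3, 0.9                # step size and momentum (illustrative)
    x = np.zeros(20)
    x_prev = x.copy()
    for _ in range(20000):
        i = rng.integers(200)              # sample one row: a stochastic gradient
        g = (A[i] @ x - b[i]) * A[i]
        # Heavy ball step: gradient move plus a multiple of the previous move.
        x, x_prev = x - gamma * g + beta * (x - x_prev), x
    print(np.linalg.norm(A @ x - b))       # residual should be near zero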
BIO
Nicolas is a final-year PhD student in the School of Mathematics at The University of Edinburgh. More specifically, he is a member of the Edinburgh Research Group in Optimization (ERGO), under the supervision of Dr. Peter Richtárik. Before moving to Edinburgh, he spent 4 years in Athens as an undergraduate student in the Department of Mathematics at the National and Kapodistrian University of Athens <http://en.uoa.gr/>, and 1 year as a postgraduate student at Imperial College London, where he obtained an MSc in Computing (Computational Management Science) <http://www.imperial.ac.uk/computing/prospective-students/courses/pg/special…>.
His research interests include (but are not limited to): large-scale optimization, machine learning, deep learning, randomized numerical linear algebra, and randomized and distributed algorithms.
Website: https://www.maths.ed.ac.uk/~s1461357/
Last-minute change for a bigger room (this one has a capacity of 130, so we should be fine).
The talk will happen in Z-330, in the Claire McNicoll building!
See you there,
Rim & Sai
This week we have Hugo Larochelle from Mila + Brain giving a talk on Fri November 9 2018 at 10:30 AM in room JC S1-111
Will this talk be streamed <https://mila.bluejeans.com/809027115/webrtc>? Yes
Recorded? Yes
Feel like learning to learn? Hugo will teach you the right way, this Friday, one shot at a time ;)
See you there!
Rim and Sai
TITLE Few-Shot Learning with Meta-Learning: Progress Made and Challenges Ahead.
ABSTRACT
A lot of the recent progress on many AI tasks was enabled in part by the availability of large quantities of labeled data. Yet, humans are able to learn concepts from as little as a handful of examples. Meta-learning is a very promising framework for addressing the problem of generalizing from small amounts of data, known as few-shot learning. In meta-learning, our model is itself a learning algorithm: it takes as input a training set and outputs a classifier. For few-shot learning, it is (meta-)trained directly to produce classifiers with good generalization performance for problems with very little labeled data. In this talk, I'll present an overview of the recent research that has made exciting progress on this topic (including my own) and will discuss the challenges as well as research opportunities that remain.
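To make the "a model that takes a training set and outputs a classifier" view concrete, here is a minimal sketch of one few-shot episode in the style of prototypical networks (Snell et al.), one well-known meta-learning method, not necessarily the talk’s focus. The encoder, shapes, and data below are illustrative placeholders:

    # One 5-way, 1-shot episode: the learner embeds a small support (training)
    # set and outputs a classifier by comparing query embeddings to per-class
    # prototypes. Meta-training repeats this over many sampled episodes.
    import torch
    import torch.nn as nn

    encoder = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 32))

    def episode_loss(support_x, support_y, query_x, query_y, n_classes):
        z_support = encoder(support_x)                      # embed the support set
        protos = torch.stack([z_support[support_y == c].mean(0)
                              for c in range(n_classes)])   # one prototype per class
        dists = torch.cdist(encoder(query_x), protos)       # query-to-prototype distances
        return nn.functional.cross_entropy(-dists, query_y) # nearest prototype wins

    support_x, support_y = torch.randn(5, 64), torch.arange(5)      # 1 shot per class
    query_x, query_y = torch.randn(15, 64), torch.arange(5).repeat(3)
    loss = episode_loss(support_x, support_y, query_x, query_y, n_classes=5)
    loss.backward()   # gradients flow through the whole episode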
BIO
Hugo Larochelle is a Research Scientist at Google Brain and leads the Montreal Google Brain team. He is also a member of Yoshua Bengio's Mila and an Adjunct Professor at the Université de Montréal. Previously, he was an Associate Professor at the Université de Sherbrooke. He also co-founded Whetlab, which was acquired in 2015 by Twitter, where he then worked as a Research Scientist in the Twitter Cortex group. From 2009 to 2011, he was a member of the machine learning group at the University of Toronto, as a postdoctoral fellow under the supervision of Geoffrey Hinton. He obtained his Ph.D. at the Université de Montréal, under the supervision of Yoshua Bengio. His long-time nemesis is Aaron Courville. Finally, he has a popular online course on deep learning and neural networks, freely accessible on YouTube.
This week we have our own Gauthier Gidel giving a talk on Fri October 26 2018 at 11:00 in room PCM Z315
Will this talk be streamed <https://mila.bluejeans.com/809027115/webrtc>? Yes
Recorded? Yes
You GAN learn a lot going to this talk (too easy?)
See you there!
Rim and Sai
TITLE A Variational Inequality Perspective on Generative Adversarial Networks
KEYWORDS
GANs, variational inequality, mini-max optimization
ABSTRACT
Generative adversarial networks (GANs) form a generative modeling approach known for producing appealing samples, but they are notably difficult to train. One common way to tackle this issue has been to propose new formulations of the GAN objective. Yet, surprisingly few studies have looked at optimization methods designed for this adversarial training. In this work, we cast GAN optimization problems in the general variational inequality framework. Tapping into the mathematical programming literature, we counter some common misconceptions about the difficulties of saddle point optimization and propose to extend methods designed for variational inequalities to the training of GANs. We apply averaging, extrapolation and a novel computationally cheaper variant that we call extrapolation from the past to the stochastic gradient method (SGD) and Adam.
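One of the variational inequality methods the abstract refers to is extrapolation, i.e. the extragradient method. A minimal sketch on the bilinear saddle problem min_x max_y xy, where plain simultaneous gradient descent/ascent cycles and diverges but the look-ahead step converges to the saddle point (0, 0). This is standard extragradient, not the paper’s cheaper "extrapolation from the past" variant, and the step size is our choice:

    # Extragradient on min_x max_y x*y: extrapolate, then update with the
    # gradients evaluated at the extrapolated (look-ahead) point.
    eta = 0.3
    x, y = 1.0, 1.0
    for _ in range(200):
        x_h = x - eta * y      # look-ahead step: d/dx (x*y) = y
        y_h = y + eta * x      #                  d/dy (x*y) = x
        x = x - eta * y_h      # update step uses the look-ahead gradients
        y = y + eta * x_h
    print(x, y)                # both approach 0, unlike plain descent/ascent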
BIO
Gauthier Gidel received the Diplôme de l’École Normale Supérieure in 2017 (ULM MPI2013) and the Master of Science MVA from École Normale Supérieure Paris-Saclay in 2016. Gauthier is currently pursuing his PhD at Mila and DIRO, Université de Montréal, under the supervision of Simon Lacoste-Julien. Gauthier’s PhD thesis topic revolves around saddle point optimization (a.k.a. mini-max problems) for machine learning and, more generally, variational inequalities, on which Gauthier has published several papers [Gidel et al. 2017, Gidel et al. 2018].
This week we have *Nick Pawlowski* from *ICL (interning at FAIR)* giving a talk on *Fri October 19 2018* at *10:30* in *Jean Coutu S1-111*
Will this talk be streamed <https://mila.bluejeans.com/809027115/webrtc>?
Yes
Nick has to be back at FAIR at 2pm, so if you want to meet with him, come out to lunch and you can arrange to meet him after.
*Prior* to seeing this ad, I would have been very *uncertain* about coming. But now it seems like just what the doctor ordered!
Michael
*TITLE* Bayesian Deep Learning and Applications to Medical Imaging
*KEYWORDS* Bayesian deep learning, medical applications
*ABSTRACT*
Deep learning revolutionised the way we approach computer vision and medical image analysis. Yet despite improved accuracy scores and other metrics, deep learning methods tend to be overconfident on unseen data, or even when predicting the wrong label. Bayesian deep learning offers a framework to alleviate some of these concerns by modelling the uncertainty over the weights that generate those predictions. This talk will review some previous achievements of the field and introduce Bayes by Hypernet (BbH). BbH uses neural networks to parametrise the variational approximation of the distribution of the parameters. We present more complex parameter distributions, better robustness to adversarial examples, and improved uncertainties. Lastly, we present the use of Bayesian NNs for outlier detection in the medical imaging domain, particularly the application to brain lesion detection.
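As a rough sketch of the BbH mechanic described above (a hypernetwork parametrises the distribution over weights), the toy below maps noise samples to the weights of a small predictor and reads predictive uncertainty off the spread over sampled weights. The variational training objective is omitted, and all sizes and names are our placeholders:

    # Flavor of Bayes by Hypernet: a hypernetwork maps noise to the weights of
    # a predictive network, implicitly defining a distribution over weights.
    import torch
    import torch.nn as nn

    noise_dim, in_dim, out_dim = 8, 10, 2
    hypernet = nn.Sequential(                       # noise -> weights and bias
        nn.Linear(noise_dim, 64), nn.ReLU(),
        nn.Linear(64, out_dim * in_dim + out_dim),
    )

    def predict(x, n_samples=32):
        preds = []
        for _ in range(n_samples):
            theta = hypernet(torch.randn(noise_dim))         # one weight sample
            W = theta[: out_dim * in_dim].view(out_dim, in_dim)
            b = theta[out_dim * in_dim :]
            preds.append(torch.softmax(x @ W.T + b, dim=-1))
        preds = torch.stack(preds)
        return preds.mean(0), preds.std(0)          # predictive mean and spread

    mean, spread = predict(torch.randn(4, in_dim))  # spread ~ model uncertainty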
*BIO*
Nick is a PhD student in the Biomedical Image Analysis group at Imperial College London, supervised by Ben Glocker. He works on methods to integrate and use uncertainty with deep learning, focusing on Bayesian neural networks and their use for outlier detection. Nick is currently a Research Intern at FAIR Montreal and a main developer of DLTK, a toolkit for deep learning in medical imaging. This past summer, he was a Machine Learning resident at Google X.
This week we have our very own *Devon Hjelm* from *MSR Montreal x Mila* giving a talk on *Friday October 12 2018* at *10:30* in room *Jean Coutu S1-111*
Will this talk be streamed <https://mila.bluejeans.com/809027115/webrtc>? No
Will this talk be recorded? Yes
Maximize your own information and come learn at this talk!
Michael
*TITLE* Learning representations with Deep InfoMax
*KEYWORDS* representation learning, unsupervised learning, adversarial learning
*ABSTRACT*
In this work, we perform unsupervised learning of representations by
maximizing mutual information between an input and the output of a deep
neural network encoder. Importantly, we show that structure matters:
incorporating knowledge about locality of the input to the objective can
greatly influence a representation’s suitability for downstream tasks. We
further control characteristics of the representation by matching to a
prior distribution adversarially. Our method, which we call Deep InfoMax
(DIM), outperforms a number of popular unsupervised learning methods and
competes with fully-supervised learning on several classification tasks.
DIM opens new avenues for unsupervised learning of representations and is
an important step towards flexible formulations of representation-learning
objectives for specific end-goals (https://arxiv.org/abs/1808.06670).
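As a flavor of the local mutual-information objective, here is a minimal InfoNCE-style contrastive loss between per-image "global" summaries and "local" patch features; DIM itself also uses other estimators (e.g. a JSD-based one), and the encoders, shapes, and data below are placeholders:

    # For each local patch, classify which image's global summary it belongs
    # to; minimizing this cross-entropy maximizes an InfoNCE lower bound on
    # the mutual information between local and global features.
    import torch
    import torch.nn.functional as F

    B, P, D = 16, 49, 64
    local_feats = torch.randn(B, P, D, requires_grad=True)  # conv-map patches
    global_feat = local_feats.mean(1)                       # per-image summary

    # scores[b, c, p] = <global of image b, patch p of image c>
    scores = torch.einsum('bd,cpd->bcp', global_feat, local_feats)
    logits = scores.permute(1, 2, 0).reshape(B * P, B)      # one row per patch
    targets = torch.arange(B).repeat_interleave(P)          # matching image id
    loss = F.cross_entropy(logits, targets)
    loss.backward()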
*BIO*
Devon Hjelm is a researcher at Microsoft Research Montreal and an Adjunct
Professor at MILA. He did his postdoc at MILA, where he focused on
adversarial learning and generative models. His current research focuses on
using mutual information estimation objectives in representation learning
for applications in computer vision, natural language, and RL.