[Lisa_seminaires] [mila-tous] [reminder] [Tea Talk] Sarath Chandar (Mila + brain) Fri November 23 2018 10:30 AA3195

Pablo Fonseca palefo at gmail.com
Fri Nov 23 13:02:18 EST 2018


The recording is here: https://bluejeans.com/s/MZCfM/


On Fri, Nov 23, 2018 at 9:59 AM Rim Assouel <rim.assouel at gmail.com> wrote:

> This happens in 30 minutes!
>
> On Nov 20, 2018, at 11:04, rim.assouel at gmail.com wrote:
>
> This week we have *Sarath Chandar* from *Mila + brain* giving a talk on *Fri
> November 23 2018* at *10:30* in room *AA3195*
>
> Will this talk be streamed <https://mila.bluejeans.com/4255239897/webrtc>? Yes.
> Recorded? Yes.
>
> Likely To Deceive: short on puns this week (*meta-pun intended*)
>
> See you there!
> Rim and Sai
>
> *TITLE* RNNs, Long-term Dependencies, and Lifelong Learning
>
>
> *ABSTRACT*
> Part 1: Towards Non-saturating Recurrent Units for Modelling Long-term
> Dependencies
>
> Modelling long-term dependencies is a challenge for recurrent neural
> networks, primarily because gradients vanish during training as the
> sequence length increases. Gradients can be attenuated by transition
> operators and are attenuated or dropped by activation functions. Canonical
> architectures like the LSTM alleviate this issue by skipping information
> through a memory mechanism. We propose a new recurrent architecture, the
> Non-saturating Recurrent Unit (NRU), that relies on a memory mechanism but
> forgoes both saturating activation functions and saturating gates, in
> order to further alleviate vanishing gradients. In a series of synthetic
> and real-world tasks, compared against a range of other architectures, the
> proposed model is the only one that ranks among the top two models on
> every task, with and without long-term dependencies.
>
> Part 2: Training Recurrent Neural Networks for Lifelong Learning
>
> Capacity saturation and catastrophic forgetting are the central challenges
> of any parametric lifelong learning system. In this work, we study these
> challenges in the context of sequential supervised learning, with an
> emphasis on recurrent neural networks. To evaluate models in the lifelong
> learning setting, we propose a simple, intuitive, curriculum-based
> benchmark in which models are trained on a task with increasing levels of
> difficulty. As a step towards developing true lifelong learning systems,
> we unify Gradient Episodic Memory (a catastrophic forgetting alleviation
> approach) and Net2Net (a capacity expansion approach). Evaluation on the
> proposed benchmark shows that the unified model is better suited to the
> lifelong learning setting than either constituent model alone.
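>
> To make the vanishing-gradient motivation in Part 1 concrete, here is a
> minimal sketch, assuming a hypothetical scalar recurrence (this is not the
> NRU itself; the weight, start state, and sequence length are arbitrary
> illustrative choices):
>
>     import numpy as np
>
>     T = 50       # sequence length (hypothetical)
>     w = 1.0      # scalar recurrent weight (hypothetical)
>
>     def chain_gradient(act, act_deriv, h0=2.0):
>         """d h_T / d h_0 for the scalar recurrence h_t = act(w * h_{t-1})."""
>         h, grad = h0, 1.0
>         for _ in range(T):
>             pre = w * h
>             grad *= w * act_deriv(pre)   # one chain-rule factor per time step
>             h = act(pre)
>         return grad
>
>     tanh, dtanh = np.tanh, lambda x: 1.0 - np.tanh(x) ** 2           # saturating
>     relu, drelu = (lambda x: max(x, 0.0)), (lambda x: float(x > 0))  # non-saturating
>
>     print("tanh chain gradient:", chain_gradient(tanh, dtanh))  # orders of magnitude below 1
>     print("relu chain gradient:", chain_gradient(relu, drelu))  # exactly 1.0 in this toy setup
>
> The saturating derivative multiplies in a factor below one at every step,
> which is the attenuation by activation functions that the abstract refers to.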
>
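> For Part 2, a rough sketch of the capacity-expansion side, in the spirit of
> Net2Net's function-preserving widening (the layer sizes and random choices
> below are hypothetical, and this is not the unified model from the talk):
>
>     import numpy as np
>
>     rng = np.random.default_rng(0)
>     d_in, d_hidden, d_out = 4, 3, 2            # hypothetical sizes
>     W1 = rng.normal(size=(d_hidden, d_in))
>     W2 = rng.normal(size=(d_out, d_hidden))
>
>     def forward(W1, W2, x):
>         return W2 @ np.maximum(W1 @ x, 0.0)
>
>     def net2wider(W1, W2, new_hidden):
>         """Widen the hidden layer without changing the function: copy random
>         existing units and split their outgoing weights among the copies."""
>         old = W1.shape[0]
>         mapping = np.concatenate([np.arange(old),
>                                   rng.integers(0, old, new_hidden - old)])
>         counts = np.bincount(mapping, minlength=old)
>         W1_new = W1[mapping]                        # duplicated incoming weights
>         W2_new = W2[:, mapping] / counts[mapping]   # split outgoing weights
>         return W1_new, W2_new
>
>     x = rng.normal(size=d_in)
>     W1w, W2w = net2wider(W1, W2, new_hidden=5)
>     print(np.allclose(forward(W1, W2, x), forward(W1w, W2w, x)))  # True
>
> The widened network computes the same outputs as before, so training can
> continue from where the smaller model left off once its capacity saturates.
>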
> *BIO*
> Sarath Chandar is a fourth-year Ph.D. candidate at Mila, working with Yoshua
> Bengio and Hugo Larochelle. His research interests include deep learning,
> reinforcement learning, and natural language processing. He is a recipient
> of the 2018 IBM Ph.D. Fellowship. For more details, see
> http://sarathchandar.in/.
>
>
> --
> You received this message because you are subscribed to the Google Groups
> "MILA Tous" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to mila-tous+unsubscribe at mila.quebec.
>

