[Lisa_seminaires] [Tea Talk] Sarath Chandar (Mila + Brain) Fri November 23 2018 10:30 AA3195

rim.assouel at gmail.com
Tue Nov 20 11:04:19 EST 2018


This week we have Sarath Chandar from Mila + Brain giving a talk on Fri November 23 2018 at 10:30 in room AA3195.

Will this talk be streamed <https://mila.bluejeans.com/4255239897/webrtc>? Yes. Recorded? Yes.

Likely To Deceive: short on puns this week (*meta-pun intended*)

See you there!
Rim and Sai 

TITLE RNNs, Long-term Dependencies, and Lifelong Learning


ABSTRACT 
Part 1: Towards Non-saturating Recurrent Units for Modelling Long-term Dependencies

Modelling long-term dependencies is a challenge for recurrent neural networks, primarily because gradients vanish during training as the sequence length increases. Gradients can be attenuated by transition operators and are attenuated or dropped by activation functions. Canonical architectures like the LSTM alleviate this issue by skipping information through a memory mechanism. We propose a new recurrent architecture, the Non-saturating Recurrent Unit (NRU), that relies on a memory mechanism but forgoes both saturating activation functions and saturating gates, in order to further alleviate vanishing gradients. In a series of synthetic and real-world tasks, compared against a range of other architectures, the proposed model is the only one that ranks among the top two models on every task, with and without long-term dependencies.

Part 2: Training Recurrent Neural Networks for Lifelong Learning

Capacity saturation and catastrophic forgetting are the central challenges of any parametric lifelong learning system. In this work, we study these challenges in the context of sequential supervised learning, with an emphasis on recurrent neural networks. To evaluate models in the lifelong learning setting, we propose a simple, intuitive, curriculum-based benchmark in which models are trained on a task with increasing levels of difficulty. As a step towards developing true lifelong learning systems, we unify Gradient Episodic Memory (a catastrophic-forgetting alleviation approach) and Net2Net (a capacity expansion approach). Evaluation on the proposed benchmark shows that the unified model is more suitable for the lifelong learning setting than its constituent models.
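For Part 1, here is a minimal, hypothetical sketch of the general idea behind a non-saturating recurrent cell, not the exact NRU equations from the talk or paper: an explicit memory vector plus ReLU-only updates, so no sigmoid/tanh gate can saturate and shrink gradients. The class name, layer sizes, and update rule below are illustrative assumptions.

# Hypothetical sketch (NOT the paper's exact NRU): a recurrent cell with an
# explicit memory vector and only non-saturating (ReLU) nonlinearities,
# i.e. no sigmoid/tanh gates whose saturation would attenuate gradients.
import torch
import torch.nn as nn


class NonSaturatingCell(nn.Module):
    def __init__(self, input_size, hidden_size, memory_size):
        super().__init__()
        self.hidden = nn.Linear(input_size + hidden_size + memory_size, hidden_size)
        self.write = nn.Linear(hidden_size, memory_size)   # what to add to memory
        self.erase = nn.Linear(hidden_size, memory_size)   # what to subtract from memory

    def forward(self, x, h, m):
        # Update the hidden state from the input, previous hidden state, and memory.
        h_new = torch.relu(self.hidden(torch.cat([x, h, m], dim=-1)))
        # Additive, unbounded memory update (write minus erase, both ReLU-activated)
        # instead of the sigmoid-gated interpolation used in LSTM/GRU cells.
        m_new = m + torch.relu(self.write(h_new)) - torch.relu(self.erase(h_new))
        return h_new, m_new


# Usage: unroll over a sequence of shape (T, B, input_size).
cell = NonSaturatingCell(input_size=8, hidden_size=16, memory_size=32)
x_seq = torch.randn(5, 4, 8)
h = torch.zeros(4, 16)
m = torch.zeros(4, 32)
for x_t in x_seq:
    h, m = cell(x_t, h, m)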
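For Part 2, here is a rough, hypothetical sketch of the Gradient Episodic Memory side of the unified approach (the Net2Net capacity-expansion step and the exact unification presented in the talk are not shown): before each update, the current-task gradient is projected whenever it conflicts with the gradient computed on an episodic memory of earlier data. The function names and the single-memory simplification are assumptions for illustration only.

# Hypothetical GEM-style step for a single episodic memory (NOT the full
# unified GEM + Net2Net procedure from the talk).
import torch
import torch.nn as nn


def flat_grad(model, loss):
    """Gradient of `loss` w.r.t. all model parameters, flattened into one vector.
    Assumes every parameter participates in the loss."""
    grads = torch.autograd.grad(loss, list(model.parameters()))
    return torch.cat([g.reshape(-1) for g in grads])


def gem_step(model, optimizer, loss_fn, current_batch, memory_batch):
    x_cur, y_cur = current_batch
    x_mem, y_mem = memory_batch

    g_cur = flat_grad(model, loss_fn(model(x_cur), y_cur))   # new-task gradient
    g_mem = flat_grad(model, loss_fn(model(x_mem), y_mem))   # episodic-memory gradient

    # If the update would increase the memory loss (negative dot product),
    # project g_cur so it no longer conflicts with the memory gradient.
    dot = torch.dot(g_cur, g_mem)
    if dot < 0:
        g_cur = g_cur - (dot / torch.dot(g_mem, g_mem)) * g_mem

    # Write the (possibly projected) gradient back and take an optimizer step.
    offset = 0
    for p in model.parameters():
        n = p.numel()
        p.grad = g_cur[offset:offset + n].view_as(p).clone()
        offset += n
    optimizer.step()
    optimizer.zero_grad()


# Usage with a toy model and random (x, y) batches:
model = nn.Sequential(nn.Linear(10, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()
cur = (torch.randn(8, 10), torch.randint(0, 2, (8,)))
mem = (torch.randn(8, 10), torch.randint(0, 2, (8,)))
gem_step(model, opt, loss_fn, cur, mem)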

BIO 
Sarath Chandar is a 4th year Ph.D. Candidate at MILA working with Yoshua Bengio and Hugo Larochelle. His research interests include deep learning, reinforcement learning, and natural language processing. He is a recipient of the IBM Ph.D. Fellowship for 2018. For more details refer to http://sarathchandar.in/.

