This week we have Sarath Chandar from Mila + Brain giving a talk on Friday, November 23, 2018 at 10:30 in room AA3195.

Will this talk be streamed? Yes. Recorded? Yes.

Likely To Deceive: short on puns this week (*meta-pun intended*)

See you there!
Rim and Sai

TITLE
RNNs, Long-term Dependencies, and Lifelong Learning

ABSTRACT
Part 1: Towards Non-saturating Recurrent Units for Modelling Long-term Dependencies

Modelling long-term dependencies is a challenge for recurrent neural networks. This is primarily because gradients vanish during training as the sequence length increases: while gradients can be attenuated by the transition operators, they are often attenuated or dropped by the activation functions. Canonical architectures like the LSTM alleviate this issue by skipping information through a memory mechanism. We propose a new recurrent architecture, the Non-saturating Recurrent Unit (NRU), which relies on a memory mechanism but forgoes both saturating activation functions and saturating gates, in order to further alleviate vanishing gradients. In a series of synthetic and real-world tasks, we demonstrate that the proposed model is the only one that ranks among the top two models across all tasks, with and without long-term dependencies, when compared against a range of other architectures.

Part 2: Training Recurrent Neural Networks for Lifelong Learning

Capacity saturation and catastrophic forgetting are the central challenges of any parametric lifelong learning system. In this work, we study these challenges in the context of sequential supervised learning, with an emphasis on recurrent neural networks. To evaluate models in the lifelong learning setting, we propose a simple and intuitive curriculum-based benchmark in which models are trained on a task of increasing difficulty. As a step towards developing true lifelong learning systems, we unify Gradient Episodic Memory (an approach that alleviates catastrophic forgetting) and Net2Net (a capacity expansion approach). Evaluation on the proposed benchmark shows that the unified model is more suitable than either constituent model for the lifelong learning setting.
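
For those who want a concrete picture of the Part 1 idea before the talk, below is a minimal, hypothetical sketch (in PyTorch) of a recurrent cell that keeps an explicit memory vector and avoids saturating activations, using ReLU instead of tanh/sigmoid gates. This is only an illustration of the general principle, not the actual NRU from the paper; the module names, sizes, and update rule are assumptions made for the example.

    # Illustrative sketch only (NOT the NRU from the paper): a recurrent cell
    # with an explicit memory vector and no saturating activations or gates.
    import torch
    import torch.nn as nn

    class NonSaturatingCell(nn.Module):
        def __init__(self, input_size, hidden_size, memory_size):
            super().__init__()
            self.hidden = nn.Linear(input_size + hidden_size + memory_size, hidden_size)
            self.write = nn.Linear(hidden_size, memory_size)  # what to add to memory
            self.erase = nn.Linear(hidden_size, memory_size)  # what to remove from memory
            self.relu = nn.ReLU()

        def forward(self, x, h, m):
            # Hidden-state update uses a non-saturating activation (ReLU, not tanh).
            h_new = self.relu(self.hidden(torch.cat([x, h, m], dim=-1)))
            # Additive memory update; the "gates" are unbounded ReLUs, not sigmoids.
            m_new = m + self.relu(self.write(h_new)) - self.relu(self.erase(h_new))
            return h_new, m_new

    # Usage: unroll over a sequence of shape (batch, time, input_size).
    cell = NonSaturatingCell(input_size=8, hidden_size=32, memory_size=16)
    x = torch.randn(4, 10, 8)
    h = torch.zeros(4, 32)
    m = torch.zeros(4, 16)
    for t in range(x.size(1)):
        h, m = cell(x[:, t], h, m)

The only point of the sketch is that the memory update is additive and every nonlinearity is unbounded, so gradients flowing back through time are not squashed by saturating functions; how the actual NRU realizes this is what the talk will cover.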

BIO
Sarath Chandar is a fourth-year Ph.D. candidate at Mila, working with Yoshua Bengio and Hugo Larochelle. His research interests include deep learning, reinforcement learning, and natural language processing. He is a recipient of the IBM Ph.D. Fellowship for 2018. For more details, see http://sarathchandar.in/.