Forwarding the announcement of Caglar's thesis defense next week
*1pm*
*Friday June 15* *AA1360*
---------- Forwarded message ---------
From: Pierre McKenzie mckenzie@iro.umontreal.ca
Date: Fri, Jun 8, 2018 at 3:02 PM
Subject: Thesis defense of Caglar Gulcehre
To: seminaires@iro.umontreal.ca
DOCTORAL THESIS DEFENSE
Département d'informatique et de recherche opérationnelle, Université de Montréal
CANDIDATE: Caglar Gulcehre
TITLE: Learning and Time: on Using Memory and Curricula for Natural Language Understanding
DATE: Friday, June 15, 2018
TIME: 13:00
LOCATION: Room 1360, Pavillon André-Aisenstadt, Université de Montréal
ABSTRACT:
In this thesis, I present some of the steps we took toward advancing natural language understanding and learning long-term dependencies. The goal of these advances is to keep us on the path toward better artificial-intelligence algorithms built on deep-learning architectures. Deep-learning architectures have had a profound effect on language-understanding applications such as summarization, machine translation, language modeling, and image caption generation. I will summarize five papers that we wrote during my Ph.D.
In our first article, we propose a novel method for exploiting the abundant monolingual data available for training neural machine translation models. We accomplish this by first training a long short-term memory (LSTM) language model on a large monolingual corpus, and then fusing the outputs or hidden representations of that language model with the decoder of the neural machine translation model, which is trained end to end with an attention mechanism.
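As a rough illustration of the fusion idea, here is a minimal PyTorch sketch of gating a pre-trained language model's hidden state into a translation decoder's output layer; the class name, dimensions, and gating form are illustrative assumptions, not the exact architecture from the paper.

```python
import torch
import torch.nn as nn

class DeepFusionDecoderOutput(nn.Module):
    """Illustrative sketch: fuse a (frozen) LM hidden state with the
    NMT decoder state before predicting the next target word."""
    def __init__(self, vocab_size, dec_dim, lm_dim):
        super().__init__()
        self.gate = nn.Linear(lm_dim, lm_dim)          # learned gate on the LM
        self.out = nn.Linear(dec_dim + lm_dim, vocab_size)

    def forward(self, dec_hidden, lm_hidden):
        # Scale the language model's hidden state by a learned sigmoid gate,
        # then concatenate it with the decoder's hidden state.
        g = torch.sigmoid(self.gate(lm_hidden))
        fused = torch.cat([dec_hidden, g * lm_hidden], dim=-1)
        return torch.log_softmax(self.out(fused), dim=-1)
```

The gate lets the network learn, per unit and per step, how much to rely on the monolingual language model versus the translation decoder.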
In our second paper, we propose an approach to the problem of rare words in natural language processing tasks. Our approach augments the attention-based encoder-decoder architecture by replacing the softmax layer with our proposed pointer-softmax layer, which lets the decoder either generate a word from a shortlist vocabulary or point to a location in the source sentence.
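For intuition, a simplified sketch of a pointer-softmax-style output layer follows; the switching variable, layer names, and shapes here are assumptions for illustration rather than the paper's exact formulation.

```python
import torch
import torch.nn as nn

class PointerSoftmax(nn.Module):
    """Illustrative sketch: mix a shortlist softmax with a pointer
    distribution over source positions via a learned switch."""
    def __init__(self, dec_dim, ctx_dim, shortlist_size):
        super().__init__()
        self.shortlist = nn.Linear(dec_dim, shortlist_size)
        self.switch = nn.Linear(dec_dim + ctx_dim, 1)   # p(point) vs. p(generate)

    def forward(self, dec_hidden, context, attn_weights):
        # The attention weights over source positions double as the
        # pointer distribution; the switch allocates probability mass.
        z = torch.sigmoid(self.switch(torch.cat([dec_hidden, context], dim=-1)))
        gen_probs = torch.softmax(self.shortlist(dec_hidden), dim=-1)
        return (1 - z) * gen_probs, z * attn_weights
```

The two returned distributions jointly sum to one, so a rare word can be copied from the source when it is missing from the shortlist.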
In our third paper, we propose two new approaches to learning alignments in a sequence-to-sequence model. Our models address the difficulty of learning alignments between the source and target contexts that arises when the source context is very long.
In "Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes," we propose a new approach for augmenting neural networks with an explicit memory mechanism. Our model achieves promising results on question answering and algorithmic tasks.
Finally, I will conclude with our "Noisy Activation Functions" paper, in which we propose a novel activation function that makes activations stochastic by injecting noise into them.
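A minimal sketch of the idea, assuming a hard-tanh nonlinearity and a noise scale tied to the degree of saturation (both illustrative choices, not necessarily the paper's exact formulation):

```python
import torch

def noisy_hard_tanh(x, noise_std=1.0, training=True):
    """Illustrative sketch: inject noise whose scale grows with how
    saturated the unit is, so learning signal still flows at saturation."""
    y = torch.clamp(x, -1.0, 1.0)           # hard tanh
    if training:
        saturation = (x - y).abs()           # zero inside the linear regime
        y = y + torch.randn_like(x) * noise_std * saturation
    return y
```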
JURY:
Chair and rapporteur: Pierre McKenzie (DIRO)
Research advisor: Yoshua Bengio (DIRO)
Jury member: Simon Lacoste-Julien (DIRO)
External examiner: Christopher Manning (Stanford)
Welcome to all. The presentation will be in English.
*Reminder*: this is in 20 minutes!