In this thesis, a family of deep learning models known as recurrent neural networks is studied in depth. Recurrent neural networks are a special type of artificial neural network that is particularly well suited to modeling the temporal structure of sequential data such as text and speech. They serve as the core module of many practical applications, including speech recognition, text-to-speech, machine translation, machine comprehension, and question answering. This thesis presents a series of studies on deep multiscale recurrent neural networks and on novel architectures that address the inherent problems of recurrent neural networks.
Three of the articles propose advanced network architectures for implementing deep multiscale recurrent neural networks. In the first article, we introduce a new type of network architecture that adds more communication channels to recurrent neural networks. The recurrence is no longer restricted to self-connections, as in conventional recurrent neural networks, but is fully connected between all hidden layers at consecutive time steps. The influence of the information passing through each channel is adaptively controlled by parameterized gating units. In the second article, we study a neural machine translation system that uses a character-level decoder. The motivation behind this work is to answer a fundamental question: can a translation be generated as a sequence of characters instead of a sequence of words? We design a two-layer recurrent neural network architecture that captures the fast and slow components of a sequence separately. In the third article, we investigate a recurrent neural network architecture that can update the states of its hidden layers at multiple timescales in order to capture the hierarchical temporal structure of sequences. The proposed framework introduces a set of boundary-detecting units that find the ends of meaningful chunks. The inclusion of the boundary detectors leads to a novel update mechanism that allows the recurrent neural network to update each hidden layer at a different timescale, depending on the states of the boundary detectors.
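To make the gated cross-layer recurrence of the first article concrete, the following is a minimal NumPy sketch, not the thesis implementation; all parameter names, dimensions, and the particular sigmoid gating form are illustrative assumptions. Each hidden layer at time t receives the states of all hidden layers at time t-1, and every cross-layer connection is scaled by a learned scalar gate.

    import numpy as np

    # Toy sketch (hypothetical, for illustration only) of gated feedback
    # between all hidden layers of a simple tanh recurrent network.
    rng = np.random.default_rng(0)
    n_layers, dim = 3, 8

    # Randomly initialized toy parameters.
    W_in   = [rng.normal(scale=0.1, size=(dim, dim)) for _ in range(n_layers)]
    U_rec  = [[rng.normal(scale=0.1, size=(dim, dim)) for _ in range(n_layers)]
              for _ in range(n_layers)]   # U_rec[i][j]: layer j at t-1 -> layer i at t
    w_gate = [[rng.normal(scale=0.1, size=2 * dim) for _ in range(n_layers)]
              for _ in range(n_layers)]   # parameters of the scalar gates

    def step(h_prev, x):
        """One time step: every layer reads every layer's previous state through a gate."""
        h_new = []
        below = x                          # input to the bottom layer
        for i in range(n_layers):
            pre = W_in[i] @ below
            for j in range(n_layers):
                # Scalar gate controlling how much of layer j's previous state
                # flows into layer i at this time step.
                g = 1.0 / (1.0 + np.exp(-w_gate[i][j] @ np.concatenate([below, h_prev[j]])))
                pre += g * (U_rec[i][j] @ h_prev[j])
            h_new.append(np.tanh(pre))
            below = h_new[i]               # the layer above reads the current layer
        return h_new

    # Run the sketch on a short dummy input sequence.
    h = [np.zeros(dim) for _ in range(n_layers)]
    for t in range(5):
        h = step(h, rng.normal(size=dim))

Setting every gate to zero except the self-connection recovers a conventional stacked recurrent network, which is the sense in which the gates add extra, adaptively controlled communication channels.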
Finally, in the fourth article, we study the inclusion of latent variables in recurrent neural networks. The complexity and high variability of sequential data such as speech make it difficult to learn meaningful structure from the data. We propose a recurrent extension of the variational auto-encoder that introduces high-level latent variables into recurrent neural networks, and we show significant performance improvements on sequence modeling tasks such as modeling human speech signals and handwriting.
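The following is a minimal NumPy sketch of one step of a recurrent model with a per-time-step latent variable, in the spirit of a recurrent extension of the variational auto-encoder; it is illustrative only, and all parameter names, dimensions, and factorizations are assumptions rather than the thesis implementation. A prior over the latent z_t is computed from the previous recurrent state, an approximate posterior also conditions on the current observation, and the sampled z_t enters the recurrence together with x_t.

    import numpy as np

    # Hypothetical toy parameters (randomly initialized for illustration only).
    rng = np.random.default_rng(0)
    x_dim, z_dim, h_dim = 4, 2, 8
    W_prior = rng.normal(scale=0.1, size=(2 * z_dim, h_dim))          # h_{t-1} -> prior mean/log-var
    W_enc   = rng.normal(scale=0.1, size=(2 * z_dim, h_dim + x_dim))  # (h_{t-1}, x_t) -> posterior
    W_rnn   = rng.normal(scale=0.1, size=(h_dim, h_dim + x_dim + z_dim))

    def vrnn_step(h_prev, x_t):
        """One step with a latent variable z_t tied into the recurrence."""
        # Prior over z_t conditioned on the recurrent state (generation path).
        mu_p, logvar_p = np.split(W_prior @ h_prev, 2)
        # Approximate posterior conditioned on the state and the observation (inference path).
        mu_q, logvar_q = np.split(W_enc @ np.concatenate([h_prev, x_t]), 2)
        # Reparameterized sample from the approximate posterior.
        z_t = mu_q + np.exp(0.5 * logvar_q) * rng.normal(size=z_dim)
        # The recurrence now depends on both the observation and the sampled latent.
        h_t = np.tanh(W_rnn @ np.concatenate([h_prev, x_t, z_t]))
        # Per-step KL(q || p) term of the variational lower bound.
        kl = 0.5 * np.sum(logvar_p - logvar_q
                          + (np.exp(logvar_q) + (mu_q - mu_p) ** 2) / np.exp(logvar_p) - 1)
        return h_t, kl

    # Accumulate the KL term over a dummy sequence.
    h, total_kl = np.zeros(h_dim), 0.0
    for x_t in rng.normal(size=(6, x_dim)):
        h, kl = vrnn_step(h, x_t)
        total_kl += kl

Because z_t is resampled at every time step and feeds back into the recurrent state, the model can represent the kind of high-level variability in speech and handwriting that a purely deterministic recurrence struggles to capture.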