Soutenance de thèse / PhD defense
Monday, December 11, room 3195, Pavillon Aisenstadt
Candidate: Li Yao
Advisor: Yoshua Bengio
External examiner: Sanja Fidler
Other jury members: Aaron Courville, Christopher Pal
Learning visual representations with neural networks for image generation and video captioning
The past decade has been a golden era of neural network research. Not only have neural networks been successfully applied to increasingly challenging real-world problems, but they have also become the dominant approach in many of the domains where they have been tested, including, for instance, language understanding, game playing, and computer vision, thanks to their advantages in computational efficiency and statistical capacity.
This work applies neural networks to problems in computer vision where high-level, semantically meaningful representations play a fundamental role. It demonstrates, both theoretically and experimentally, that such representations can be learned from data with and without supervision.
The main content of the work is divided into two parts. The first part studies neural networks in the context of learning visual representations for the task of video captioning. Models are developed that dynamically focus on different frames while generating a natural language description of a short video; this model is further improved with recurrent convolutional operations. The part concludes by identifying fundamental challenges in video captioning and proposing a new type of model-based evaluation metric that may be used experimentally as an oracle to benchmark performance.
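To make the frame-attention idea concrete, below is a minimal sketch of soft temporal attention over per-frame features, where the caption decoder re-weights the frames at each generated word. The names, shapes, and NumPy formulation are illustrative assumptions, not the thesis implementation.

# A minimal sketch of soft temporal attention over video frames.
# `frame_feats` stands in for per-frame CNN features and `h_dec` for the
# caption decoder's hidden state at the current word; all are assumptions.
import numpy as np

rng = np.random.default_rng(0)

T, D_feat, D_dec, D_att = 8, 16, 12, 10     # frames, feature dim, decoder dim, attention dim
frame_feats = rng.normal(size=(T, D_feat))  # one feature vector per frame
h_dec = rng.normal(size=(D_dec,))           # decoder state while emitting the next word

# Attention parameters (random here; learned jointly with the decoder in practice).
W_f = rng.normal(size=(D_feat, D_att))
W_h = rng.normal(size=(D_dec, D_att))
v = rng.normal(size=(D_att,))

# Score each frame against the current decoder state, then normalize with softmax.
scores = np.tanh(frame_feats @ W_f + h_dec @ W_h) @ v      # shape (T,)
alphas = np.exp(scores - scores.max())
alphas /= alphas.sum()                                      # attention weights over frames

# The context vector is a weighted sum of frame features: the model "focuses"
# on different frames at each decoding step by recomputing these weights.
context = alphas @ frame_feats                              # shape (D_feat,)
print("attention weights:", np.round(alphas, 3))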
The second part studies the family of models that generate images. While the first part is supervised, this part is unsupervised. Its focus is the popular family of Neural Autoregressive Density Estimators (NADEs), a tractable probabilistic model of natural images. This work first establishes a connection between NADEs and Generative Stochastic Networks (GSNs). The standard NADE is then improved by introducing multiple iterations in its inference procedure without increasing the number of parameters, a variant dubbed the "iterative NADE".
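For reference, here is a minimal sketch of the tractable autoregressive factorization that NADE implements, p(x) = prod_d p(x_d | x_<d), computed in a single pass over a binarized image. Parameters are random and the dimensions are illustrative assumptions, not the thesis code; the iterative variant mentioned above would reuse the same parameters over several refinement passes, whereas only one pass is shown here.

# A minimal sketch of a NADE forward pass over a binary pixel vector.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
D, H = 28 * 28, 64                        # pixels, hidden units (illustrative sizes)
W = rng.normal(scale=0.01, size=(H, D))   # encoder weights, shared across positions
V = rng.normal(scale=0.01, size=(D, H))   # per-pixel output weights
b = np.zeros(D)                           # output biases
c = np.zeros(H)                           # hidden biases

def nade_log_likelihood(x):
    """Log p(x) for a binary vector x, computed in O(D*H) by reusing activations."""
    a = c.copy()                          # running pre-activation of the hidden layer
    log_p = 0.0
    for d in range(D):
        h = sigmoid(a)                               # hidden state given x_<d
        p_d = sigmoid(b[d] + V[d] @ h)               # p(x_d = 1 | x_<d)
        log_p += x[d] * np.log(p_d) + (1 - x[d]) * np.log(1 - p_d)
        a += W[:, d] * x[d]                          # fold x_d into the running state
    return log_p

x = (rng.random(D) < 0.5).astype(float)   # a random "binarized image"
print("log-likelihood:", nade_log_likelihood(x))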
==> 3 p.m. to 5 p.m.
2017-12-04 21:22 GMT-08:00 Yoshua Bengio yoshua.umontreal@gmail.com:
lisa_seminaires@iro.umontreal.ca