Hi all,

Just a kind reminder - it's today!

Dima

On Wed, Jul 12, 2017, 15:26 Dzmitry Bahdanau <dimabgv@gmail.com> wrote:
Hi all, 

Our next speaker is Ethan Perez, who is currently interning at MILA with Aaron Courville. Please come to AA6214 on July 14 at 13:45!

Title: Learning Visual Reasoning Without Strong Priors

Abstract:
Achieving artificial visual reasoning - the ability to answer image-related questions that require a multi-step, high-level process - is an important step towards artificial general intelligence. This multi-modal task requires learning a question-dependent, structured reasoning process over images from language. Standard deep learning approaches tend to exploit biases in the data rather than learn this underlying structure, while leading methods learn to visually reason successfully but are hand-crafted for reasoning. We show that a general-purpose, Conditional Batch Normalization method achieves state-of-the-art results on the Compositional Language and Elementary Visual Reasoning (CLEVR) task with a 2.4% error rate. We outperform the next-best end-to-end method (4.5%), which uses data augmentation, and even methods that use extra supervision (3.1%). We probe our model to shed light on how it reasons, showing it has learned a question-dependent, multi-step process. Previous work has operated under the assumption that visual reasoning calls for a specialized architecture, but we show that a general architecture with proper conditioning can learn to visually reason effectively.
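(For anyone curious before the talk: the core idea of conditional batch normalization is that the question predicts a per-channel scale and shift applied to the normalized image features. Below is a rough, illustrative PyTorch-style sketch - not the speaker's code; module names and dimensions are assumptions.)

import torch
import torch.nn as nn

class ConditionalBatchNorm2d(nn.Module):
    """Illustrative sketch: batch norm whose affine parameters come from a question embedding."""

    def __init__(self, num_channels, question_dim):
        super().__init__()
        # Batch norm without its own learned affine parameters;
        # the scale and shift are predicted from the question instead.
        self.bn = nn.BatchNorm2d(num_channels, affine=False)
        self.to_gamma = nn.Linear(question_dim, num_channels)
        self.to_beta = nn.Linear(question_dim, num_channels)

    def forward(self, feature_maps, question_embedding):
        # feature_maps: (batch, channels, height, width) from a CNN
        # question_embedding: (batch, question_dim), e.g. from an RNN over the question
        normalized = self.bn(feature_maps)
        gamma = self.to_gamma(question_embedding).unsqueeze(-1).unsqueeze(-1)
        beta = self.to_beta(question_embedding).unsqueeze(-1).unsqueeze(-1)
        # Question-dependent, feature-wise scale and shift (scale centered at 1).
        return (1 + gamma) * normalized + beta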

Bio:
Ethan Perez is a rising 4th-year computer science undergrad at Rice University. He is currently interning at MILA, working with Aaron Courville on visual reasoning. Previously, he researched deep semi-supervised learning methods and built machine learning models for location detection at Google Maps and fraud detection at Uber.

Dima