[Lisa_teatalk] Practice talk, Combining Modality Specific Deep Nets for Emotion Recognition in Video

Razvan Pascanu r.pascanu at gmail.com
Fri Nov 29 21:20:15 EST 2013


This is a practice talk that Samira will give at ICMI in Sydney,
presenting our team's results on the EmotiW challenge.

It's a short talk (20 minutes), and we would like you to attend and
give her feedback.

Title: Combining Modality Specific Deep Neural Networks for Emotion
Recognition in Video
Speaker: Samira Ebrahimi Kahou

Location: AA-1409
Time: Tuesday, December 3rd, 3 p.m.

Abstract of the article:

In this paper we present the techniques used for the University of
Montreal's team submissions to the 2013 Emotion Recognition in the
Wild Challenge. The challenge is to classify the emotions expressed by
the primary human subject in short video clips extracted from
feature-length movies. This involves analyzing clips of acted scenes
lasting approximately one to two seconds, including the audio track,
which may contain human voices as well as background music. Our approach
combines multiple deep neural networks for different data modalities,
including: (1) a deep convolutional neural network for the analysis of
facial expressions within video frames; (2) a deep belief net to capture
audio information; (3) a deep autoencoder to model the spatio-temporal
information produced by the human actions depicted within the entire
scene; and (4) a shallow network architecture focused on extracted
features of the mouth of the primary human subject in the scene. We
discuss each of these techniques, their performance characteristics and
different strategies to aggregate their predictions.
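As a rough illustration of how the predictions of modality-specific
networks can be combined, here is a minimal Python sketch of late
fusion by weighted averaging of class probabilities. The class list,
weights, and function below are illustrative assumptions for this
email, not the exact aggregation strategies evaluated in the paper.

    import numpy as np

    # The seven EmotiW emotion classes (assumed ordering, for illustration).
    EMOTIONS = ["angry", "disgust", "fear", "happy", "neutral", "sad", "surprise"]

    def late_fusion(modality_probs, weights=None):
        """Weighted average of per-modality softmax outputs for one clip.

        modality_probs: (n_models, n_classes) array, one row per network.
        weights:        optional per-model mixing weights; uniform if None.
        """
        probs = np.asarray(modality_probs, dtype=float)
        if weights is None:
            weights = np.ones(probs.shape[0])
        weights = np.asarray(weights, dtype=float)
        weights /= weights.sum()        # normalize so the result is a distribution
        return weights @ probs          # convex combination of the distributions

    # Made-up outputs for two modalities (e.g. the frame CNN and the audio net):
    video_p = [0.10, 0.05, 0.05, 0.60, 0.10, 0.05, 0.05]
    audio_p = [0.15, 0.05, 0.10, 0.40, 0.15, 0.10, 0.05]
    fused = late_fusion([video_p, audio_p], weights=[0.7, 0.3])
    print(EMOTIONS[int(np.argmax(fused))])   # -> happy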

Our best single model was a convolutional neural network trained to
predict emotions from static frames using two large data sets, the
Toronto Face Database and our own set of face images harvested from
Google image search, followed by a per-frame aggregation strategy that
used the challenge training data. This yielded a test set accuracy of
35.58%. Using our best strategy for aggregating our top-performing
models into a single predictor, we achieved an accuracy of 41.03%
on the challenge test set. Both results compare favorably with the
challenge baseline test set accuracy of 27.56%.
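
For the per-frame aggregation step mentioned above, a simple baseline
is to average the frame-level class distributions from the static-frame
CNN into a single clip-level prediction. The sketch below assumes that
setup and is only a simplified stand-in for the strategy actually
learned from the challenge training data.

    import numpy as np

    def aggregate_frames(frame_probs):
        """Mean-pool per-frame emotion distributions into a clip prediction.

        frame_probs: (n_frames, n_classes) softmax outputs of a frame-level CNN.
        Returns the clip-level distribution and the index of the winning class.
        """
        clip_dist = np.asarray(frame_probs, dtype=float).mean(axis=0)
        return clip_dist, int(np.argmax(clip_dist))

    # Fabricated outputs for a three-frame clip over seven classes:
    frames = np.random.dirichlet(np.ones(7), size=3)
    clip_dist, label = aggregate_frames(frames)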

Hope to see many of you there,
Razvan