[Lisa_seminaires] UdeM-McGill-mPrime machine learning seminar Tues. Oct. 25th @ 14h00, location TBD

Guillaume Desjardins guillaume.desjardins at gmail.com
Fri 21 Oct 16:18:56 EDT 2011


A UdeM-McGill-mPrime machine learning seminar will be held this
Tuesday, Oct. 25th. The talk, given by Richard Socher, will
take place from 14h00-15h00 at the Université de Montréal;
the room number will be confirmed shortly. Hope to see you there!

Title: Recursive Deep Learning in Natural Language Processing and Computer
Vision

Abstract:

Hierarchical and recursive structure is commonly found in different
modalities, including natural language sentences and scene images.  I
will present some of our recent work on three recursive neural network
architectures that learn meaning representations for such hierarchical
structure. These models obtain state-of-the-art performance on several
language and vision tasks.

The meaning of phrases and sentences is determined by the meanings of
their words and the rules of compositionality. We introduce a recursive
neural network (RNN) for syntactic parsing that can learn vector
representations capturing both the syntactic and semantic information
of phrases and sentences. For instance, the phrases "declined to
comment" and "would not disclose" have similar representations.
Since our RNN does not rely on language-specific assumptions, it
can also be used to find hierarchical structure in complex scene
images. This algorithm obtains state-of-the-art performance for
semantic scene segmentation on the Stanford Background and the MSRC
datasets and outperforms Gist descriptors for scene classification by 4%.
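
As a rough, illustrative sketch only (not the exact model presented in
the talk): the core of such a recursive network is a single composition
function that merges two child vectors into a parent vector of the same
size, so it can be applied bottom-up over a parse tree or a tree of
image regions. In Python/NumPy, with an assumed dimension and tanh
nonlinearity:

    import numpy as np

    d = 50                                         # assumed embedding dimension
    rng = np.random.default_rng(0)
    W = rng.standard_normal((d, 2 * d)) * 0.01     # composition weights (illustrative init)
    b = np.zeros(d)

    def compose(c1, c2):
        # Merge two child vectors (words or phrases) into one parent
        # vector of the same dimension, so the same operation can be
        # applied recursively up the tree.
        return np.tanh(W @ np.concatenate([c1, c2]) + b)

    # e.g. combining "declined" with "to comment" yields a phrase vector
    declined = rng.uniform(-1, 1, d)
    to_comment = rng.uniform(-1, 1, d)
    phrase = compose(declined, to_comment)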

The ability to identify sentiments about personal experiences,
products, movies, etc. is crucial for understanding user-generated content
in social networks, blogs, or product reviews. The second architecture
I will talk about is based on recursive autoencoders (RAEs). RAEs learn
vector representations for phrases well enough to outperform
traditional supervised sentiment classification methods on
several standard datasets.
We also show that, without supervision, RAEs can learn features that
outperform previous approaches for paraphrase detection on the
Microsoft Research Paraphrase corpus.
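
Again as an illustrative sketch under assumed settings (dimension, tanh,
squared error), not the reported configuration: a recursive autoencoder
pairs the composition step with a decoder that tries to reconstruct the
two children from the parent vector, and the reconstruction error
provides the unsupervised training signal.

    import numpy as np

    d = 50                                              # assumed embedding dimension
    rng = np.random.default_rng(0)
    W_enc = rng.standard_normal((d, 2 * d)) * 0.01      # encoder (composition) weights
    b_enc = np.zeros(d)
    W_dec = rng.standard_normal((2 * d, d)) * 0.01      # decoder weights
    b_dec = np.zeros(2 * d)

    def rae_step(c1, c2):
        # Encode two child vectors into a parent, decode the parent back
        # into two reconstructions, and return the parent together with
        # the squared reconstruction error used as the unsupervised signal.
        p = np.tanh(W_enc @ np.concatenate([c1, c2]) + b_enc)
        r1, r2 = np.split(np.tanh(W_dec @ p + b_dec), 2)
        err = 0.5 * (np.sum((r1 - c1) ** 2) + np.sum((r2 - c2) ** 2))
        return p, err

    parent, err = rae_step(rng.uniform(-1, 1, d), rng.uniform(-1, 1, d))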

This talk presents joint work with Andrew Ng and Chris Manning.


More information about the Lisa_seminaires mailing list