[Lisa_seminaires] Fwd: UdeM-McGill-MITACS machine learning seminar Tue Oct. 19 at 15h00, AA-3195

Dumitru Erhan erhandum at iro.umontreal.ca
Mon 18 Oct 21:39:34 EDT 2010


Reminder for tomorrow's (Tuesday) seminar

---------- Forwarded message ----------
From: Dumitru Erhan <erhandum at iro.umontreal.ca>
Date: Friday, October 15, 2010
Subject: UdeM-McGill-MITACS machine learning seminar Tue Oct. 19 at 15h00, AA-3195
To: lisa_seminaires at iro.umontreal.ca


The UdeM-McGill-MITACS machine learning seminar series
<http://www.iro.umontreal.ca/article.php3?id_article=107&lang=en> is
continuing its fall schedule. Next week's seminar:

Large Scale Image and Music Annotation: Learning to Rank and
Multi-Tasking with Joint Embeddings

by Jason Weston
Google Research, NY

Location: Pavillon André-Aisenstadt (UdeM), room AA-3195
Time: Tuesday, October 19, 15:00

Abstract: In the first part of the talk we will discuss large scale
image annotation. Image annotation datasets are becoming larger and
larger, with tens of millions of images and tens of thousands of
possible annotations. We propose a well-performing method that scales
to such datasets by simultaneously learning to optimize precision at k
of the ranked list of annotations for a given image *and* learning a
low-dimensional joint embedding space for both images and annotations.
Our method both outperforms several baseline methods and, in
comparison to them, is faster and consumes less memory. We also
demonstrate how our method learns an interpretable model, where
annotations with alternate spellings or even languages are close in
the embedding space. Hence, even when our model does not predict the
exact annotation given by a human labeler, it often predicts similar
annotations, a fact we quantify with a newly introduced "sibling"
precision metric, on which our method also obtains good results.
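
As a rough illustration of the kind of model described above (this is
a sketch under assumed details, not the speaker's actual code), the
snippet below embeds images and annotations into a shared
low-dimensional space with linear maps and applies a sampled,
margin-based ranking update in the spirit of optimizing precision at
k. All names, sizes, and the update rule are illustrative:

import numpy as np

rng = np.random.default_rng(0)
d_img, d_emb, n_annot = 1000, 100, 50000            # illustrative sizes

V = 0.01 * rng.standard_normal((d_emb, d_img))      # image features -> embedding
W = 0.01 * rng.standard_normal((n_annot, d_emb))    # one embedding per annotation

def scores(x):
    """Score every annotation for image features x (rank by descending score)."""
    return W @ (V @ x)

def ranking_update(x, pos, lr=0.1, margin=1.0, max_tries=100):
    """One stochastic step: sample negatives until one violates the margin,
    then push the positive annotation above it in the shared space."""
    e = V @ x
    s_pos = W[pos] @ e
    for _ in range(max_tries):
        neg = int(rng.integers(n_annot))
        if neg != pos and margin + W[neg] @ e > s_pos:
            grad_e = W[neg] - W[pos]        # d(hinge loss)/d(image embedding)
            W[pos] += lr * e                # pull positive toward the image
            W[neg] -= lr * e                # push the violating negative away
            V -= lr * np.outer(grad_e, x)   # move the image map as well
            break

x = rng.standard_normal(d_img)              # stand-in for an image feature vector
ranking_update(x, pos=42)
top5 = np.argsort(-scores(x))[:5]           # ids of the 5 highest-ranked annotations

Since annotations are just points in the shared space, alternate
spellings or translations of the same concept can end up close
together, which is the interpretability property described above.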

In the second (shorter) part of the talk we will discuss large scale
music annotation. Music prediction tasks include predicting the
genre, style, or artist given a song or clip of audio; predicting
similar artists given an artist; and predicting related songs given a
song, clip, artist name, or genre or style tag. That is, we are
interested in essentially every semantic relationship between the
different musical concepts in our database. In realistic databases,
the number of songs is measured in the millions, and the number of
artists in the tens of thousands or more, providing a considerable
challenge to standard machine learning techniques. In this work, we
propose a method that scales to such datasets and attempts to
capture the semantic similarities between the database items by
modeling audio, artist names, and genre and style tags in a single
low-dimensional semantic space. This space is learnt by jointly
optimizing the set of prediction tasks of interest using
multi-task learning. Our method both outperforms baseline methods and,
in comparison to them, is faster and consumes less memory. We then
demonstrate how our method learns an interpretable model, where the
semantic space captures well the similarities of interest.
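
Again purely as an illustration (names and sizes here are
assumptions, not taken from the talk), the appeal of a single
semantic space is that every task above reduces to a
nearest-neighbour lookup among entity embeddings, and multi-task
training amounts to alternating ranking updates of the kind sketched
earlier across tasks:

import numpy as np

rng = np.random.default_rng(1)
d = 100                                           # shared semantic dimension
artists = 0.01 * rng.standard_normal((30000, d))  # one vector per artist
tags    = 0.01 * rng.standard_normal((500, d))    # one vector per genre/style tag
A       = 0.01 * rng.standard_normal((d, 1200))   # audio features -> shared space

def nearest(query, table, k=5):
    """Indices of the k rows of `table` most similar to `query` (dot product)."""
    return np.argsort(-(table @ query))[:k]

audio = rng.standard_normal(1200)        # stand-in for a clip's audio features
clip = A @ audio                         # embed the clip into the shared space

similar_artists = nearest(clip, artists)         # artist prediction from audio
related_tags    = nearest(clip, tags)            # tag prediction, same vector
artist_peers    = nearest(artists[7], artists)   # similar-artist task, same space

Because audio, artists, and tags share one set of coordinates,
jointly optimizing all of the prediction tasks constrains the same
parameters, which is what lets a single model serve every
relationship listed above.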

Joint work with Samy Bengio and Nicolas Usunier.



-- 
http://dumitru.ca, +1-514-432-8435


More information about the Lisa_seminaires mailing list