[Lisa_seminaires] Pre-ISMIR09 talks: Tomorrow(!), October 12th @ 12h30-13h30 in AA 3195

Guillaume Desjardins guillaume.desjardins at gmail.com
Mon 12 Oct 21:57:33 EDT 2009


I hope everyone had a good Thanksgiving, filled with family, lots of
food (of your choice) and gravy.
Unfortunately, I returned to Montreal only to realize that the
official announcement for tomorrow's talks was never sent! (Thanks
for the heads-up, Isabelle.) I hope some of you got wind of it by
browsing Lisa's Google calendar.

The talks will be given by Philippe Hamel and François Maillet, and
cover material that will be presented at this year's International
Society for Music Information Retrieval (ISMIR) conference, held in
Kobe, Japan. I apologize for the late notice and hope many of you can
attend!

-------------
Speaker: Philippe Hamel
Title: Automatic identification of instrument classes in polyphonic
and poly-instrument audio
-------------
We present and compare several models for automatic identification of
instrument classes in polyphonic and poly-instrument audio. The goal
is to be able to identify which categories of instrument (Strings,
Woodwind, Guitar, Piano, etc.) are present in a given audio example.
We use a machine learning approach to solve this task. We constructed
a system to generate a large database of musically relevant
poly-instrument audio. Our database is generated from hundreds of
instruments classified into 7 categories. Musical audio examples are
generated by mixing multi-track MIDI files with thousands of
instrument combinations. We compare three different classifiers: a
Support Vector Machine (SVM), a Multilayer Perceptron (MLP) and a Deep
Belief Network (DBN). We show that the DBN tends to outperform both
the SVM and the MLP in most cases.
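
To give a concrete feel for the task setup (illustrative only, not the
authors' code): the sketch below assumes each audio example has already
been summarized by a fixed-length feature vector and labelled with a
binary indicator vector over the 7 instrument categories. The generated
database, the actual features, and the DBN are not reproduced; the toy
data and scikit-learn classifiers are assumptions standing in for them.

# Illustrative sketch only -- toy data stands in for the generated
# poly-instrument database; labels are multi-label indicator vectors.
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
n_train, n_test, n_feats, n_classes = 2000, 500, 40, 7

# Placeholder feature vectors and instrument-class labels.
X_train = rng.normal(size=(n_train, n_feats))
Y_train = (rng.random((n_train, n_classes)) < 0.3).astype(int)
X_test = rng.normal(size=(n_test, n_feats))
Y_test = (rng.random((n_test, n_classes)) < 0.3).astype(int)

models = {
    "SVM": OneVsRestClassifier(SVC(kernel="rbf", C=1.0)),
    "MLP": MLPClassifier(hidden_layer_sizes=(100,), max_iter=500),
}
for name, model in models.items():
    model.fit(X_train, Y_train)
    pred = model.predict(X_test)
    print(name, "micro-F1:", f1_score(Y_test, pred, average="micro"))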

-------------
Speaker: François Maillet
Title: Steerable Playlist Generation by Learning Song Similarity from
Radio Station Playlists
-------------
This paper presents an approach to generating steerable playlists. We
first demonstrate a method for learning song transition probabilities
from audio features extracted from songs played in professional radio
station playlists. We then show that by using this learnt similarity
function as a prior, we are able to generate steerable playlists by
choosing the next song to play not simply based on that prior, but on
a tag cloud that the user is able to manipulate to express the
high-level characteristics of the music he wishes to listen to.
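
To make the steering step concrete, here is a minimal sketch
(illustrative only, not the paper's implementation). It assumes the
transition prior learned from radio playlists is available as a lookup
table, that songs carry weighted tags, and that the user's tag cloud is
a dictionary of tag weights; all names and the mixing weight alpha are
hypothetical.

# Illustrative sketch only. `prior` stands in for the similarity learned
# from radio-station playlists; songs and the tag cloud are toy data.
def tag_match(song_tags, cloud):
    # Dot product between a song's tag weights and the user's tag cloud.
    return sum(w * cloud.get(t, 0.0) for t, w in song_tags.items())

def next_song(current, candidates, prior, song_tags, cloud, alpha=0.5):
    # Combine the learned transition prior with how well each candidate
    # matches the tag cloud the user is manipulating.
    def score(c):
        return (alpha * prior[current].get(c, 0.0)
                + (1 - alpha) * tag_match(song_tags[c], cloud))
    return max(candidates, key=score)

prior = {"songA": {"songB": 0.8, "songC": 0.2}}
song_tags = {"songB": {"rock": 1.0, "mellow": 0.2},
             "songC": {"electronic": 0.9, "upbeat": 0.7}}
cloud = {"electronic": 1.0, "upbeat": 0.5}   # user steers toward electronic
print(next_song("songA", ["songB", "songC"], prior, song_tags, cloud))
# -> "songC": the tag cloud overrides the higher transition prior of "songB"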


--
Guillaume Desjardins


More information about the Lisa_seminaires mailing list