A reminder for tomorrow's seminar by Hari Parthasarathi.
---------- Forwarded message ----------
From: Guillaume Desjardins <guillaume.desjardins@gmail.com>
Date: Mon, May 9, 2011 at 1:50 PM
Subject: UdeM-McGill-MITACS machine learning seminar Wed May 11th @ 14h00, AA3195
To: lisa_seminaires@mercure.iro.umontreal.ca
A UdeM-McGill-MITACS machine learning seminar will be held this Wednesday, May 11th. The talk, given by Hari Parthasarathi, will take place from 14h00-15h00 in room AA3195 (Pavillon André-Aisenstadt) at the Université de Montréal. Hope to see you there!
Title: Wordless sounds aka privacy-sensitive audio
Speaker: Hari Parthasarathi
Abstract: In this talk, I will discuss a key issue in the ubiquitous capture and analysis of audio, namely "privacy". Some studies suggest that the linguistic message in audio is perhaps the most privacy-sensitive information. To this end, we have proposed and analyzed robust audio features that carry little linguistic information for two specific tasks: (a) speech/non-speech detection (SND) and (b) speaker diarization.
For the SND task, in addition to reinterpreting classical features in a privacy-sensitive framework, we quantify the abstract notion of privacy. Furthermore, the robustness of these features (matched vs. mismatched conditions, near-field vs. far-field microphones) is studied on a large dataset of nearly 500 hours.
For the diarization task, besides investigating linear prediction residual features, we derive features based on deep neural networks. We benchmark these approaches against MFCC features in single and multiple distant microphone scenarios. We also perform human and automatic speech recognition tests to assess privacy.
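For readers unfamiliar with the linear prediction residual mentioned above, the sketch below shows one common way to compute it frame by frame (inverse filtering with the LP prediction-error filter), alongside an MFCC baseline. The file name, frame sizes, LP order, and MFCC settings are placeholder assumptions for illustration, not necessarily the configuration used in the work presented in the talk.

import numpy as np
import librosa
from scipy.signal import lfilter

# Hypothetical recording and parameters; illustrative only.
y, sr = librosa.load("meeting.wav", sr=16000)
frame_len, hop, order = 400, 160, 16   # 25 ms frames, 10 ms hop at 16 kHz, LP order 16

residual_frames = []
for start in range(0, len(y) - frame_len + 1, hop):
    frame = y[start:start + frame_len] * np.hamming(frame_len)
    a = librosa.lpc(frame, order=order)                 # prediction-error filter A(z) = 1 + a1*z^-1 + ...
    residual_frames.append(lfilter(a, [1.0], frame))    # inverse filtering yields the LP residual

residual_frames = np.stack(residual_frames)             # shape: (num_frames, frame_len)

# MFCC baseline features for comparison (placeholder settings).
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=19, n_fft=frame_len, hop_length=hop)

The intuition is that the residual retains mostly excitation-related information (useful for distinguishing speakers or speech from non-speech) while discarding much of the vocal-tract shape that carries the linguistic message, which is what makes it a candidate for privacy-sensitive processing.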