[Lisa_seminaires] UdeM-McGill-mPrime machine learning seminar Wed. Oct. 26th @ 14h00, AA3195

Guillaume Desjardins guillaume.desjardins at gmail.com
Mer 26 Oct 08:29:38 EDT 2011


A reminder for today's talk by Mikolov Tomáš. I had mistakenly
advertised this as an mPrime talk, but it is in fact a tea-talk. It is
still open to everyone, however.

By request, the talk will again be broadcast using Google+ Hangouts.
Make sure to add the user lisa.umontreal at gmail.com to your circles to
get access to the talk. This will again be a trial run, so expect a
few technical hiccups.

See you there!

On Fri, Oct 21, 2011 at 4:25 PM, Guillaume Desjardins
<guillaume.desjardins at gmail.com> wrote:
> In a back-to-back special, a second UdeM-McGill-mPrime machine learning
> seminar will also be held on Wednesday, Oct. 26th. The talk, given by
> Mikolov Tomáš, will take place from 14h00 to 15h00 at the Université de
> Montréal; the room number will be confirmed shortly. Hope to see you there!
>
> Title: Language modeling with recurrent neural networks
> Abstract:
> Statistical language models are an important part of almost any speech
> recognition system today. The most basic, but also the most successful,
> models so far are based on n-gram statistics. A comparison of the
> performance of different language modeling techniques on different tasks
> will be presented; among these, neural-network-based language models
> perform the best. Next, useful extensions of the basic neural network
> model, as well as different architectures, will be discussed, such as the
> recurrent neural network architecture, classes in the output layer, and
> joint training with a maximum entropy model.
> I will then present results achieved with a novel RNNME model (a
> recurrent neural network trained jointly with a maximum entropy model) on
> a state-of-the-art setup from IBM for Broadcast News speech recognition
> (NIST RT04). Word error rate reductions over a large 4-gram model exceed
> 10%. The previously best language model on this setup, the so-called
> "model M" from IBM (a regularized class-based maximum entropy model),
> provides about a 5% reduction in WER over the 4-gram model.
> Finally, I will talk about character-level and subword-level language
> modeling experiments with different models, including a recently proposed
> RNN model trained with a new Hessian-free optimizer. These models can
> assign a meaningful probability to any word, and can be considered a
> solution to the well-known problem of infinite vocabularies (the OOV
> problem). Moreover, their size is significantly smaller than that of
> standard models.
> This talk presents joint work with Anoop Deoras, Ilya Sutskever, Stefan
> Kombrink and Hai Son Le.
>
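For readers unfamiliar with the class-based output layer mentioned in the
abstract above, here is a minimal numpy sketch of that factorization, where
P(word | history) = P(class | hidden state) * P(word | class, hidden state).
The toy sizes, random weights, and class assignment below are illustrative
assumptions only; this is not the speaker's RNNLM code nor the IBM setup
discussed in the talk.

  # Minimal sketch of an Elman-style RNN language model with a class-based
  # (factorized) output layer. All sizes and parameters are toy values.
  import numpy as np

  rng = np.random.default_rng(0)

  V, C, H = 10, 2, 8                     # vocabulary size, word classes, hidden units
  word_class = np.arange(V) % C          # toy assignment of each word to a class
  U = rng.normal(scale=0.1, size=(H, V)) # input (one-hot word) -> hidden
  W = rng.normal(scale=0.1, size=(H, H)) # hidden -> hidden (recurrence)
  Q = rng.normal(scale=0.1, size=(C, H)) # hidden -> class scores
  R = rng.normal(scale=0.1, size=(V, H)) # hidden -> within-class word scores

  def softmax(x):
      e = np.exp(x - x.max())
      return e / e.sum()

  def step(h_prev, w_t):
      """Advance the hidden state after observing word index w_t."""
      return np.tanh(U[:, w_t] + W @ h_prev)

  def next_word_prob(h, w_next):
      """P(w_next | history) = P(class(w_next) | h) * P(w_next | class, h)."""
      c = word_class[w_next]
      p_class = softmax(Q @ h)[c]
      members = np.where(word_class == c)[0]   # words sharing this class
      p_word = softmax(R[members] @ h)[list(members).index(w_next)]
      return p_class * p_word

  # Score a toy word sequence with the (untrained) model.
  h = np.zeros(H)
  sequence = [3, 1, 4, 1, 5]
  logp = 0.0
  for prev, nxt in zip(sequence[:-1], sequence[1:]):
      h = step(h, prev)
      logp += np.log(next_word_prob(h, nxt))
  print("log-probability of toy sequence:", logp)

The point of the factorization is computational: normalizing over C classes
plus the members of one class is much cheaper than a full softmax over V
words when V is large.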


More information about the Lisa_seminaires mailing list