Hi all,

This is just a reminder that we have a tea-talk today at 13:30.

Dima

On Thu, 2 Mar 2017 at 09:44 Dzmitry Bahdanau <dimabgv@gmail.com> wrote:
Hi all,

An update: we will have two presentations, not just one! Our second speaker will be Kundan Kumar, who is currently a visiting student at MILA. Please find more information below:

Title 2: SampleRNN: An Unconditional End-to-End Neural Audio Generation Model

Abstract: In this paper, we propose a novel model for unconditional audio generation based on generating one audio sample at a time. We show that our model, which profits from combining memory-less modules, namely autoregressive multilayer perceptrons, and stateful recurrent neural networks in a hierarchical structure, is able to capture underlying sources of variations in the temporal sequences over very long time spans, on three datasets of different nature. Human evaluation on the generated samples indicates that our model is preferred over competing models. We also show how each component of the model contributes to the exhibited performance.
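
For those who want a preview of the hierarchy described above, here is a minimal two-tier sketch: a stateful frame-level RNN summarizes coarse context, and a memory-less sample-level MLP predicts one quantized sample at a time. This is my own illustration, not the authors' code; all names and sizes (frame_size, dim, q_levels) are assumptions.

```python
import torch
import torch.nn as nn

class TwoTierSampleRNN(nn.Module):
    """Illustrative two-tier sketch of the hierarchical idea:
    a frame-level RNN (stateful, long time spans) conditions a
    sample-level MLP (memory-less) that emits one sample at a time.
    Sizes and names are assumptions, not the authors' code."""
    def __init__(self, frame_size=16, dim=1024, q_levels=256):
        super().__init__()
        self.frame_size = frame_size
        self.frame_rnn = nn.GRU(frame_size, dim, batch_first=True)  # coarse, stateful tier
        self.embed = nn.Embedding(q_levels, dim)                    # quantized sample embeddings
        self.mlp = nn.Sequential(                                   # memory-less autoregressive tier
            nn.Linear(frame_size * dim + dim, dim), nn.ReLU(),
            nn.Linear(dim, q_levels),                               # logits over quantized sample values
        )

    def forward(self, frames, prev_samples, h=None):
        # frames: (batch, n_frames, frame_size) previous audio frames
        # prev_samples: (batch, frame_size) quantized samples just before the target
        ctx, h = self.frame_rnn(frames, h)         # one RNN step per frame, carries long-range state
        e = self.embed(prev_samples).flatten(1)    # (batch, frame_size * dim)
        logits = self.mlp(torch.cat([e, ctx[:, -1]], dim=1))
        return logits, h                           # sample the next audio sample from logits
```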

Dima


On Wed, 1 Mar 2017 at 11:40 Dzmitry Bahdanau <dimabgv@gmail.com> wrote:

Hi all,

Our next speaker is Zhouhan Lin, a PhD student from MILA. The time and place are the usual ones: March 3, at 1:30pm, room AA6214. Hope to see many of you there!

Title: A Structured Self-Attentive Sentence Embedding

Abstract: This paper proposes a new model for extracting an interpretable sentence embedding by introducing self-attention. Instead of using a vector, we use a 2-D matrix to represent the embedding, with each row of the matrix attending on a different part of the sentence. We also propose a self-attention mechanism and a special regularization term for the model. As a side effect, the embedding comes with an easy way of visualizing what specific parts of the sentence are encoded into the embedding. We evaluate our model on 3 different tasks: author profiling, sentiment classification and textual entailment. Results show that our model yields a significant performance gain compared to other sentence embedding methods in all of the 3 tasks.
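
If you'd like a concrete picture before the talk, here is a minimal sketch of the 2-D embedding and its overlap penalty: r attention rows computed as softmax(W_s2 tanh(W_s1 H^T)) over the sentence, a matrix embedding M = AH, and the Frobenius-norm regularizer ||AA^T - I||. Hidden sizes (hidden_dim, d_a, r) are my assumptions, not values from the talk.

```python
import torch
import torch.nn as nn

class StructuredSelfAttention(nn.Module):
    """Sketch of the 2-D sentence embedding: an r-row matrix where
    each row attends to a different part of the sentence.
    Sizes are assumptions; this is not the speaker's code."""
    def __init__(self, hidden_dim=600, d_a=350, r=30):
        super().__init__()
        self.W_s1 = nn.Linear(hidden_dim, d_a, bias=False)
        self.W_s2 = nn.Linear(d_a, r, bias=False)

    def forward(self, H):
        # H: (batch, seq_len, hidden_dim) -- e.g. BiLSTM hidden states
        A = torch.softmax(self.W_s2(torch.tanh(self.W_s1(H))), dim=1)  # softmax over the sentence
        A = A.transpose(1, 2)                 # (batch, r, seq_len): r attention rows
        M = A @ H                             # (batch, r, hidden_dim): matrix embedding
        # Regularization: penalize overlap between the r attention rows
        I = torch.eye(A.size(1), device=A.device)
        P = ((A @ A.transpose(1, 2) - I) ** 2).sum(dim=(1, 2))  # squared Frobenius norm
        return M, P.mean()
```

Each row of A can be plotted over the words of the sentence, which is what gives the easy visualization mentioned in the abstract.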

Dima