Slight change of plan for next week's tea-talk. Gregoire will be giving us an overview of his recent work at Microsoft Research. The talk will be held Thursday, Nov 22nd at 3pm (as usual). David's (and Razvan's) tea-talk on motion compression will be moved to the week after NIPS (December 13th).
====
Below is an abstract of the work done this summer with Xiaodong He at Microsoft Research Redmond. The work involves a sequential neural network, and I would appreciate feedback from the RNN experts on whether it is actually new. We plan to publish this work soon. Here it goes:
Abstract: Neural Networks for Spoken Language Understanding (SLU)
One of the key problems in spoken language understanding is slot filling/concept extraction. Over the summer, we worked on this problem using neural networks from three angles. 1) At the frame level, we built an NN-based slot classification model. When initialized with pre-trained word embeddings (SENNA), it gives a significant improvement over logistic regression models using n-gram features. 2) At the sequence level, we developed a new (?) sequential NN (SeqNN) model, extended from the recurrent NN, to take sequential dependencies into account. When evaluated on the ATIS data set, it significantly outperforms the CRF baseline. 3) We further studied the robustness of the SeqNN model under noisy/adverse conditions. We will present the newly created bilingual (En-Zh) ATIS database and discuss the model's performance in a cross-lingual SLU setting with "Bing Translator" noise. We will conclude the presentation with comprehensive experimental results and a discussion of future work.
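For anyone less familiar with recurrent taggers, below is a minimal sketch of the general idea: an Elman-style RNN that tags each word of an utterance with a slot label, starting from pre-trained word embeddings. This is only an illustration of how sequential dependencies can be folded into per-token slot prediction; the class name, dimensions, and initialization are assumptions for the example and it is not the actual SeqNN model from the talk.

```python
# Illustrative sketch only (not the authors' SeqNN): an Elman-style recurrent
# tagger for slot filling, assuming pre-trained word embeddings and one slot
# label per token, as in ATIS. All names and dimensions are made up.
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

class RNNSlotTagger:
    def __init__(self, emb_dim, hidden_dim, n_slots, rng=np.random):
        # Input-to-hidden, hidden-to-hidden (recurrent), hidden-to-output weights.
        self.W_xh = 0.01 * rng.randn(hidden_dim, emb_dim)
        self.W_hh = 0.01 * rng.randn(hidden_dim, hidden_dim)
        self.W_hy = 0.01 * rng.randn(n_slots, hidden_dim)
        self.b_h = np.zeros(hidden_dim)
        self.b_y = np.zeros(n_slots)

    def forward(self, embeddings):
        """embeddings: sequence of word vectors for one utterance.
        Returns one slot-label distribution per token."""
        h = np.zeros_like(self.b_h)
        outputs = []
        for x in embeddings:
            # The hidden state depends on the current word and the previous
            # state: this is where the sequential dependency enters the model.
            h = np.tanh(self.W_xh @ x + self.W_hh @ h + self.b_h)
            outputs.append(softmax(self.W_hy @ h + self.b_y))
        return outputs

# Toy usage: a 3-token utterance, 50-d embeddings, 10 slot labels.
tagger = RNNSlotTagger(emb_dim=50, hidden_dim=100, n_slots=10)
utterance = [np.random.randn(50) for _ in range(3)]
slot_probs = tagger.forward(utterance)
predicted_slots = [int(np.argmax(p)) for p in slot_probs]
```

In contrast, the frame-level model mentioned in point 1) would classify each token independently from a fixed window of embeddings, with no recurrent hidden state carrying context along the utterance.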