I am forwarding this because of an email issue.
Fred

---------- Forwarded message ----------
From: Kyung Hyun Cho <cho.k.hyun@gmail.com>
Date: Fri, Nov 21, 2014 at 11:38 AM
Subject: Re: [Lisa_labo] Talk 21 Nov Fri @15.00 AA3195 by Dr. John Hershey
To: lisa_labo@iro.umontreal.ca, lisa_teatalk@iro.umontreal.ca, lisa_seminaires@iro.umontreal.ca
Cc: John Hershey <hershey@merl.com>
Dear all,
This is a reminder that we have a talk by Dr. John Hershey *today* at 15.00.
Hope to see many of you there! - K
On Sun, Nov 16, 2014 at 6:00 PM, Kyung Hyun Cho cho.k.hyun@gmail.com wrote:
Dear all,
Dr. John Hershey (Mitsubishi Electric Research Labs, US) will tell us about the connection between model-free approaches (e.g. deep neural networks) and model-based approaches (e.g. probabilistic graphical models), and how we can exploit this connection. His talk will start at 15.00 next Friday (21 Nov) at the usual place, AA3195.
Hope to see many of you there!
- Cho
===
- Speaker: Dr. John Hershey (Mitsubishi Electric Research Labs)
- Date/Time: 15.00 - 16.00, 21 Nov
- Place: AA3195
- Title: Deep Unfolding: Infusing Deep Architectures with Generative Model Inference
- Abstract:
Model-based methods and deep neural networks have both been tremendously successful paradigms in machine learning. In model-based methods, problem domain knowledge can be built into the constraints of the model, typically at the expense of difficulties during inference. In contrast, deterministic deep neural networks are constructed in such a way that inference is straightforward, but their architectures are rather generic and it can be unclear how to incorporate problem domain knowledge. This work aims to obtain the advantages of both approaches. To do so, we start with a model-based approach and unfold the iterations of its inference method to form a layer-wise structure. We then decouple the model parameters across layers to increase the network's learning capacity. This results in novel neural-network-like architectures that incorporate our model-based constraints, but can be trained discriminatively to perform fast and accurate inference. We show how this framework can be applied to a non-negative matrix factorization model to obtain a new kind of non-negative deep neural network that can be trained using a multiplicative backpropagation-style update algorithm. We present speech enhancement experiments showing that our approach is competitive with conventional neural networks despite using far fewer parameters. (A minimal illustrative sketch of the unfolding idea follows the bio below.)
- Bio:
John Hershey has been a researcher at Mitsubishi Electric Research Labs (MERL) in Cambridge, MA, since 2010. Before that, he spent five years as a researcher at IBM's Watson Research Center in New York, in the Speech Algorithms and Engines group, and one year as a visiting researcher in the speech group at Microsoft Research in Redmond, WA. He obtained his Ph.D. at the University of California, San Diego. He is currently working on machine learning for signal enhancement and separation, speech recognition, language processing, and adaptive user interfaces.
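
For those curious ahead of the talk, below is a minimal Python/NumPy sketch of the unfolding idea described in the abstract. It is an illustration under simple assumptions, not Dr. Hershey's implementation: the function name, the toy data, and the choice of the least-squares NMF objective min ||x - W h||^2 with its classic multiplicative update are all picked here for clarity.

    import numpy as np

    def unfolded_nmf_inference(x, W_layers, eps=1e-8):
        # One layer per unfolded iteration of the multiplicative NMF update
        #     h <- h * (W_k^T x) / (W_k^T W_k h),
        # with its own (potentially untied) non-negative dictionary W_k,
        # mirroring the "decouple the model parameters across layers" step.
        h = np.ones(W_layers[0].shape[1])  # non-negative initialization
        for W in W_layers:
            h = h * (W.T @ x) / (W.T @ (W @ h) + eps)  # stays non-negative
        return h

    # Toy demo (hypothetical data): reconstruct a non-negative signal.
    rng = np.random.default_rng(0)
    W_true = rng.random((20, 5))
    x = W_true @ rng.random(5)
    h_est = unfolded_nmf_inference(x, [W_true.copy() for _ in range(10)])
    print("relative reconstruction error:",
          np.linalg.norm(W_true @ h_est - x) / np.linalg.norm(x))

With all layers tied to the same dictionary, this is exactly ten iterations of ordinary NMF inference; untying and discriminatively training the per-layer W_k is what turns the unrolled inference into a trainable deep architecture.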