Dear all,
Dr. John Hershey (Mitsubishi Electric Research Labs, US) will tell us about
the connection between model-free approaches (e.g. deep neural networks)
and model-based approaches (e.g. probabilistic graphical models) and how we
can utilize this connection. His talk starts at 15.00 *today* (21
Nov) at the usual place, AA3195.
Hope to see many of you there!
- Cho
===
- Speaker: Dr. John Hershey (Mitsubishi Electric Research Labs)
- Date/Time: 15.00 - 16.00, 21 Nov
- Place: AA3195
- Title: Deep Unfolding: Infusing Deep Architectures with Generative Model
Inference
- Abstract:
Model-based methods and deep neural networks have both been tremendously
successful paradigms in machine learning. In model-based methods, problem
domain knowledge can be built into the constraints of the model, typically
at the expense of difficulties during inference. In contrast, deterministic
deep neural networks are constructed in such a way that inference is
straightforward, but their architectures are rather generic and it can be
unclear how to incorporate problem domain knowledge. This work aims to
obtain the advantages of both approaches. To do so, we start with a
model-based approach and unfold the iterations of its inference method to
form a layer-wise structure. We then decouple the model parameters across
layers to increase the network's learning capacity. This results in novel
neural-network-like architectures that incorporate our model-based
constraints, but can be trained discriminatively to perform fast and
accurate inference. We show how this framework can be applied to a
non-negative matrix factorization model to obtain a new kind of
non-negative deep neural network that can be trained using a
multiplicative backpropagation-style update algorithm. We present speech
enhancement experiments showing that our approach is competitive with
conventional neural networks despite using far fewer parameters.
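
For those curious before the talk, here is a rough sketch of what "unfolding"
means in the NMF case, as I understand it from the abstract. This is a toy
illustration only, not the speaker's implementation: the squared-error
multiplicative update, the variable names, and the idea of one basis matrix
per layer are my own assumptions.

    # Toy sketch: unfolded NMF inference with per-layer ("decoupled") bases.
    import numpy as np

    def unfolded_nmf_inference(M, W_layers, eps=1e-8):
        """Infer non-negative activations H from mixture spectrogram M.

        M        : (F, T) non-negative mixture spectrogram.
        W_layers : list of (F, R) non-negative basis matrices, one per
                   layer; in classic NMF these would all be the same matrix,
                   here each layer gets its own learnable copy.
        """
        R, T = W_layers[0].shape[1], M.shape[1]
        H = np.ones((R, T))                   # simple non-negative init
        for W in W_layers:                    # each iteration = one "layer"
            numer = W.T @ M
            denom = W.T @ (W @ H) + eps
            H = H * numer / denom             # multiplicative update keeps H >= 0
        return H

    # Toy usage: a 3-layer unfolded network on random non-negative data.
    rng = np.random.default_rng(0)
    F, T, R, K = 64, 100, 8, 3
    M = np.abs(rng.normal(size=(F, T)))
    W_layers = [np.abs(rng.normal(size=(F, R))) for _ in range(K)]
    H = unfolded_nmf_inference(M, W_layers)
    reconstruction = W_layers[-1] @ H         # non-negative estimate of M

The point of the talk, as I read it, is that once the iterations are laid out
as layers like this, the per-layer parameters can be trained discriminatively
(e.g. with a multiplicative backpropagation-style update) instead of being
tied to a single generative model.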
- Bio:
John Hershey has been a researcher at Mitsubishi Electric Research Labs (MERL)
in Cambridge, MA, since 2010. Prior to that, John spent five years as a
researcher at IBM's Watson Research Center in New York, in the Speech
Algorithms and Engines group, and one year as a visiting researcher in the
speech group at Microsoft Research in Redmond, WA. He obtained his
Ph.D. at the University of California, San Diego. He is currently working
on machine learning for signal enhancement and separation, speech
recognition, language processing, and adaptive user interfaces.