[Lisa_seminaires] UdeM-McGill-MITACS machine learning seminar: Fri August 22nd, 14:00, AA-3195
Dumitru Erhan
dumitru.erhan at umontreal.ca
Thu Aug 21 11:14:20 EDT 2008
TOMORROW'S UdeM-McGill-MITACS seminar (see http://www.iro.umontreal.ca/article.php3?id_article=107&lang=en):
Differentiable Sparse Coding
by J. Andrew Bagnell
Carnegie Mellon Robotics Institute and Machine Learning Department
Location: Pavillon André-Aisenstadt (UdeM), room 3195
Date and Time: Friday, August 22nd, 2008, 14h00
Sparse approximation is a key technique recently developed in
engineering and the sciences that approximates an input signal,
denoted here by X, as a “sparse” combination of fixed bases B. The
approach relies on an optimization algorithm to infer the most
probable weights \hat{W} for reconstructing the input signal, given
the model X ≈ f(BW). Priors that produce sparse solutions for W,
especially L1 regularization, have gained attention because of their
usefulness in ill-posed engineering problems ranging from geology to
magnetic resonance imaging, their ability to elucidate certain
neurobiological phenomena, and their ability to condense a
high-dimensional input signal into useful features for classification.
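
For concreteness, here is a minimal sketch of the inference step in
the linear case (f the identity) with an L1 prior, solved by ISTA
(iterative shrinkage-thresholding); the names and constants below are
illustrative, not the speaker's implementation.

import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1: shrink each coordinate toward zero.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(x, B, lam=0.1, n_iters=500):
    # Minimize 0.5 * ||x - B w||^2 + lam * ||w||_1 over the weights w.
    step = 1.0 / np.linalg.norm(B, 2) ** 2  # 1 / Lipschitz constant of the gradient
    w = np.zeros(B.shape[1])
    for _ in range(n_iters):
        grad = B.T @ (B @ w - x)            # gradient of the reconstruction term
        w = soft_threshold(w - step * grad, step * lam)
    return w

rng = np.random.default_rng(0)
B = rng.standard_normal((64, 256))          # fixed, overcomplete bases
w_true = np.zeros(256)
w_true[[3, 50, 200]] = [1.0, -2.0, 0.5]
x = B @ w_true                              # a signal with a 3-sparse code
w_hat = ista(x, B)
print("nonzeros in recovered weights:", np.count_nonzero(w_hat))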
Sparse coding, closely connected to Independent Component Analysis as
well as to certain approaches to matrix factorization, extends sparse
approximation: it not only optimizes the best set of weights for a
given input signal, but also learns bases B that lead to a compact
representation of input signals.
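
As a hedged sketch of this extension (reusing numpy and the ista
helper above), one can alternate between inferring sparse weights for
every signal and refitting the bases; the least-squares basis update
with column renormalization is a common choice, not necessarily the
algorithm discussed in the talk.

def learn_bases(X, n_bases=128, lam=0.1, n_epochs=10, seed=0):
    # X is an (n_signals, dim) matrix of training signals.
    rng = np.random.default_rng(seed)
    B = rng.standard_normal((X.shape[1], n_bases))
    B /= np.linalg.norm(B, axis=0)                      # unit-norm basis columns
    for _ in range(n_epochs):
        # Inference step: best sparse weights for each signal under current B.
        W = np.stack([ista(x, B, lam=lam) for x in X])  # (n_signals, n_bases)
        # Learning step: least-squares fit of B given the weights, renormalized.
        B = np.linalg.lstsq(W, X, rcond=None)[0].T
        B /= np.linalg.norm(B, axis=0) + 1e-12
    return B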
Unfortunately, existing sparse coding algorithms that efficiently
infer the latent weight vector are difficult to integrate into larger
learning architectures. It has been convincingly demonstrated that
back-propagation is a crucial tool for discriminatively tuning an
existing generative model to achieve good supervised performance.
Similarly, greedy layer-wise strategies for building deep generative
models rely upon a back-propagation step to achieve excellent model
performance. Existing sparse coding architectures
produce a latent representation \hat{W} that is an unstable,
discontinuous function of the inputs and bases; an arbitrarily small
change in input can lead to the selection of a completely different
set of latent weights.
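
To see the degenerate case behind this instability, consider a
constructed toy example (not from the talk): with duplicated bases,
the L1 objective attains exactly the same value on disjoint supports,
so the argmin is not unique and a solver's selection among the ties
can jump arbitrarily.

import numpy as np

b = np.array([1.0, 0.0])
B = np.column_stack([b, b])        # two identical bases
x = 2.0 * b
lam = 0.5

def objective(w):
    r = x - B @ w
    return 0.5 * r @ r + lam * np.abs(w).sum()

w_a = np.array([1.5, 0.0])         # all weight on the first basis
w_b = np.array([0.0, 1.5])         # all weight on the second basis
print(objective(w_a), objective(w_b))  # identical objective, disjoint supports

Both solutions (and every convex combination of them) minimize the
objective, so an arbitrarily small perturbation of the input or bases
decides which support a solver returns.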
We present a new approach to sparse coding with an efficient, convex
inference step based on minimizing a KL-divergence. We show that this
increased stability leads to better semi-supervised classification
performance. Additionally, although inferring the latent weights
requires an optimization procedure (i.e., it is not closed-form), we
demonstrate that for a large class of Bregman-divergence-based priors
and loss functions, we may use implicit differentiation to efficiently
backpropagate error signals. The sparse coding bases can then be
optimized discriminatively, leading to outstanding empirical
performance and enabling sparse coding to form part of a larger
learning architecture.
Joint work with David M. Bradley
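
The abstract leaves the exact loss and prior open, so the following
sketch fixes one concrete instance to illustrate both ingredients: a
squared reconstruction loss plus a generalized-KL penalty
lam * sum(w log(w/p) - w + p) over positive weights w (keeping the
inference problem smooth and convex), and implicit differentiation of
the argmin via the implicit function theorem applied to the optimality
condition g(w, B) = B^T (B w - x) + lam * log(w / p) = 0. These
choices, and every name below, are illustrative assumptions, not
details confirmed by the abstract.

import numpy as np
from scipy.optimize import minimize

def infer_weights(x, B, p, lam):
    # Convex inference over w > 0, parameterized as w = exp(v) to stay positive.
    def obj(v):
        w = np.exp(v)
        r = B @ w - x
        kl = np.sum(w * (v - np.log(p)) - w + p)   # generalized KL(w || p)
        grad_w = B.T @ r + lam * (v - np.log(p))   # dL/dw at this point
        return 0.5 * r @ r + lam * kl, grad_w * w  # chain rule: dL/dv = dL/dw * w
    res = minimize(obj, np.log(p), jac=True, method="L-BFGS-B")
    return np.exp(res.x)

def backprop_to_bases(x, B, w, grad_w, lam):
    # Given dloss/dw at the optimum w*, return dloss/dB by implicit differentiation:
    # dloss/dB = -(r u^T + (B u) w^T), where u = H^{-1} dloss/dw, r = B w - x, and
    # H = B^T B + lam * diag(1/w) is the Hessian of the inference objective at w*.
    H = B.T @ B + lam * np.diag(1.0 / w)
    u = np.linalg.solve(H, grad_w)
    r = B @ w - x
    return -(np.outer(r, u) + np.outer(B @ u, w))

rng = np.random.default_rng(1)
B = rng.standard_normal((16, 32))
B /= np.linalg.norm(B, axis=0)
x = rng.standard_normal(16)
p = np.full(32, 1.0 / 32)             # uniform prior over the bases
w = infer_weights(x, B, p, lam=0.1)
dB = backprop_to_bases(x, B, w, np.ones_like(w), lam=0.1)  # hypothetical upstream gradient
print(w.min() > 0.0, dB.shape)        # weights stay positive; gradient matches B's shape

Because the error signal reaches B in closed form, a sketch like this
can sit inside a larger network and be trained end to end, which is
the integration the abstract describes.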