Hey Gang,

This week we have Guillaume Desjardins talking about a recent ICML paper on enhanced gradient methods.

When: Nov 16th, 14h00
Where: LISA Lab (AA3256)

Abstract:

In this tea talk, I will present recent work by KyungHyun Cho on the
"enhanced gradient" for RBMs. The motivation for this new gradient is
two-fold. First, it is easy to show that the typical maximum
likelihood gradient on the weights is a function of the gradients on
the biases. Second, the RBM is over-parametrized in that multiple
(visible/hidden state, parameter) configurations can lead to the same
energy function. The enhanced gradient addresses both of these
problems by being invariant to the bit-flip transformations that
relate these equivalent parametrizations. This results in faster
convergence, fewer "dead" filters and invariance to the actual binary
representation of the data (e.g. the ability to learn bit-flipped
MNIST). We shall also discuss links to the natural gradient and, time
allowing, their learning rate adaptation schedule.
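
For those who want a preview, here is the rough form of the update as
I remember it (so the notation is mine and the exact details may
differ from the paper). Writing <.>_d and <.>_m for data and model
expectations, in LaTeX notation:

  \nabla^e w_{ij} = \langle (v_i - \bar{v}_i)(h_j - \bar{h}_j) \rangle_d
                  - \langle (v_i - \bar{v}_i)(h_j - \bar{h}_j) \rangle_m
  \nabla^e b_i = \langle v_i \rangle_d - \langle v_i \rangle_m
                 - \sum_j \bar{h}_j \, \nabla^e w_{ij}
  \nabla^e c_j = \langle h_j \rangle_d - \langle h_j \rangle_m
                 - \sum_i \bar{v}_i \, \nabla^e w_{ij}

  where \bar{v}_i = \tfrac{1}{2}(\langle v_i \rangle_d + \langle v_i \rangle_m)
  and similarly for \bar{h}_j.

If I recall correctly, the terms subtracted from the usual statistics
are exactly the bias gradients weighted by the averaged activations,
which is what removes the dependence on the particular bit-flip
representation of the data. Guillaume will give the precise
formulation on Wednesday.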


Cheers,
Aaron

--
Aaron C. Courville
Département d’Informatique et
de recherche opérationnelle
Université de Montréal
email: Aaron.Courville@gmail.com