[Lisa_teatalk] Fri 15th: ICLR practice talks (tomorrow!)

Jörg Bornschein bornj at iro.umontreal.ca
Thu Apr 14 14:37:32 EDT 2016


Hi,

This is a last-minute announcement for our tea talk tomorrow:

We will have David Krueger and Zhouhan Lin present their ICLR papers. These
are practice talks for their presentations -- so it would be great to have
many of you there and to get a lot of constructive feedback.

When: Fri 15th, 14:30
Where: AA3195

Regularizing RNNs by Stabilizing Activations
David Krueger, Roland Memisevic
http://arxiv.org/abs/1511.08400

We stabilize the activations of Recurrent Neural Networks (RNNs) by
penalizing the squared distance between successive hidden states' norms.
This penalty term is an effective regularizer for RNNs including LSTMs and
IRNNs, improving performance on character-level language modelling and
phoneme recognition, and outperforming weight noise and dropout. We set a
new state of the art for RNNs (17.5% PER) on the TIMIT phoneme recognition
task, without using beam search. With this penalty term, an IRNN can match
the performance of an LSTM on language modelling, although adding the
penalty term to the LSTM yields better performance still. The penalty also
prevents the exponential growth of IRNN activations outside their training
horizon, allowing them to generalize to much longer sequences.
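
For anyone who wants a peek at the method before the talk: the penalty is
just the mean squared difference between the norms of consecutive hidden
states, scaled by a coefficient and added to the task loss. Below is a
minimal NumPy sketch of that term (the function name, the shape convention,
and the beta value are my own illustration, not code from the paper):

    import numpy as np

    def norm_stabilizer_penalty(hidden_states, beta=1.0):
        # hidden_states: array of shape (T, hidden_dim), one row per time step
        norms = np.linalg.norm(hidden_states, axis=1)          # ||h_t|| for each t
        return beta * np.mean((norms[1:] - norms[:-1]) ** 2)   # successive norm differences

    # e.g. 100 time steps of a 256-unit RNN; the term is added to the training cost
    H = np.random.randn(100, 256)
    penalty = norm_stabilizer_penalty(H, beta=50.0)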



Neural Networks with Few Multiplications
Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, Yoshua Bengio
http://arxiv.org/abs/1510.03009

For most deep learning algorithms, training is notoriously time-consuming.
Since most of the computation in training neural networks is typically
spent on floating-point multiplications, we investigate an approach to
training that eliminates the need for most of them. Our method consists of
two parts: First, we stochastically binarize the weights to convert the
multiplications involved in computing hidden states into sign changes.
Second, while back-propagating error derivatives, in addition to binarizing
the weights, we quantize the representations at each layer to convert the
remaining multiplications into binary shifts. Experimental results on three
popular datasets (MNIST, CIFAR10, SVHN) show that this approach not only
does not hurt classification performance but can even outperform standard
stochastic gradient descent training, paving the way to fast,
hardware-friendly training of neural networks.
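
As a rough picture of the two ingredients described above, here is a small
NumPy sketch of (a) stochastic weight binarization and (b) quantizing a
layer's representation to signed powers of two, so that multiplying by it
becomes a binary shift. The hard-sigmoid probability, the exponent range,
and the function names are my assumptions for illustration only; in the
actual training scheme the full-precision weights are kept around for the
parameter updates.

    import numpy as np

    rng = np.random.default_rng(0)

    def binarize_stochastic(W):
        # P(+1) follows a hard sigmoid of the real-valued weight; else -1
        p = np.clip((W + 1.0) / 2.0, 0.0, 1.0)
        return np.where(rng.random(W.shape) < p, 1.0, -1.0)

    def quantize_power_of_two(x, min_exp=-8, max_exp=0):
        # round each entry to the nearest signed power of two (exponent range is illustrative)
        sign = np.sign(x)
        exp = np.clip(np.round(np.log2(np.abs(x) + 1e-12)), min_exp, max_exp)
        return sign * 2.0 ** exp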


Hope to see you tomorrow,

   Jorg