Hi,
this is a last-minute announcement for our tea talk tomorrow:
We will have David Krueger and Zhouhan Lin present their ICLR papers. These are practice talks for their presentations -- so it would be great to have many of you there and to get a lot of constructive feedback.
When: Fri 15th, 14:30
Where: AA3195
Regularizing RNNs by Stabilizing Activations
David Krueger, Roland Memisevic
http://arxiv.org/abs/1511.08400
We stabilize the activations of Recurrent Neural Networks (RNNs) by penalizing the squared distance between successive hidden states' norms. This penalty term is an effective regularizer for RNNs including LSTMs and IRNNs, improving performance on character-level language modelling and phoneme recognition, and outperforming weight noise and dropout. We set the state of the art (17.5% PER) for an RNN on the TIMIT phoneme recognition task, without using beam search. With this penalty term, the IRNN can achieve performance similar to the LSTM on language modelling, although adding the penalty term to the LSTM results in superior performance. Our penalty term also prevents the exponential growth of the IRNN's activations outside of their training horizon, allowing them to generalize to much longer sequences.
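In case a concrete picture of the penalty helps before the talk: it is just the mean squared difference between the norms of successive hidden states, scaled by a coefficient and added to the task loss. A minimal sketch below, in PyTorch (not the framework used for the paper; the function name, tensor layout, and beta value are only illustrative):

import torch

def norm_stabilizer_penalty(hidden_states, beta=1.0):
    # hidden_states: (T, batch, hidden) -- assumed layout
    norms = hidden_states.norm(dim=2)   # ||h_t|| for every step and example
    diffs = norms[1:] - norms[:-1]      # ||h_t|| - ||h_{t-1}||
    return beta * diffs.pow(2).mean()   # mean squared norm difference

# usage: loss = task_loss + norm_stabilizer_penalty(all_hidden_states, beta=50.0)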
Neural Networks with Few Multiplications
Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, Yoshua Bengio
http://arxiv.org/abs/1510.03009
For most deep learning algorithms, training is notoriously time-consuming. Since most of the computation in training neural networks is typically spent on floating-point multiplications, we investigate an approach to training that eliminates the need for most of these. Our method consists of two parts: first, we stochastically binarize weights to convert the multiplications involved in computing hidden states into sign changes; second, while back-propagating error derivatives, in addition to binarizing the weights, we quantize the representations at each layer to convert the remaining multiplications into binary shifts. Experimental results across three popular datasets (MNIST, CIFAR10, SVHN) show that this approach not only does not hurt classification performance but can result in even better performance than standard stochastic gradient descent training, paving the way to fast, hardware-friendly training of neural networks.
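For those who want the two ingredients spelled out: a minimal sketch below, in PyTorch (again not what the authors used; the names and clipping details are illustrative rather than the exact scheme from the paper). Stochastically binarizing a weight to +/-1 turns multiplication by it into a sign change, and rounding a value's magnitude to the nearest power of two turns the remaining multiplications into binary shifts.

import torch

def stochastic_binarize(w):
    # P(w_b = +1) = hard sigmoid of w, i.e. clip((w + 1) / 2, 0, 1)
    p = torch.clamp((w + 1.0) / 2.0, 0.0, 1.0)
    return torch.where(torch.rand_like(p) < p,
                       torch.ones_like(w), -torch.ones_like(w))

def quantize_pow2(x):
    # round |x| to the nearest power of two and keep the sign;
    # multiplying by the result can then be implemented as a bit shift
    sign = torch.sign(x)
    mag = torch.clamp(x.abs(), min=1e-8)
    return sign * torch.pow(2.0, torch.round(torch.log2(mag)))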
Hope to see you tomorrow,
Jorg
Hi all,
I have presented this content to the lab before, so you may not get much new out of it if you were at my previous talk. This time it's a practice talk for the ICLR presentation. We hope to get your feedback to improve the presentation of our work.
On Thu, Apr 14, 2016 at 2:37 PM, Jörg Bornschein bornj@iro.umontreal.ca wrote: