Hi,

This is a last-minute announcement for our tea talk tomorrow: we will have David Krueger and Zhouhan Lin present their ICLR papers. These are practice talks for their presentations -- so it would be great to have many of you there and to get a lot of constructive feedback.

When: Fri 15th, 14:30
Where: AA3195

Regularizing RNNs by Stabilizing Activations
David Krueger, Roland Memisevic

We stabilize the activations of Recurrent Neural Networks (RNNs) by penalizing the squared distance between successive hidden states' norms. This penalty term is an effective regularizer for RNNs, including LSTMs and IRNNs, improving performance on character-level language modelling and phoneme recognition, and outperforming weight noise and dropout. We set the state of the art (17.5% PER) for an RNN on the TIMIT phoneme recognition task, without using beam search. With this penalty term, the IRNN can achieve performance similar to the LSTM on language modelling, although adding the penalty term to the LSTM results in superior performance. Our penalty term also prevents the exponential growth of the IRNN's activations outside of its training horizon, allowing it to generalize to much longer sequences.

Neural Networks with Few Multiplications
Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, Yoshua Bengio

For most deep learning algorithms, training is notoriously time-consuming. Since most of the computation in training neural networks is typically spent on floating-point multiplications, we investigate an approach to training that eliminates the need for most of these. Our method consists of two parts: first, we stochastically binarize weights to convert the multiplications involved in computing hidden states into sign changes. Second, while back-propagating error derivatives, in addition to binarizing the weights, we quantize the representations at each layer to convert the remaining multiplications into binary shifts. Experimental results across three popular datasets (MNIST, CIFAR10, SVHN) show that this approach not only does not hurt classification performance, but can result in even better performance than standard stochastic gradient descent training, paving the way to fast, hardware-friendly training of neural networks.

Hope to see you tomorrow,
Jorg
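
P.S. For anyone who wants a concrete picture of David's penalty before the talk, here is a minimal NumPy sketch of the norm-stabilizer term. The array name, the (T, hidden_dim) shape convention, the averaging over time, and the coefficient beta are my own illustration, not code from the paper:

import numpy as np

def norm_stabilizer_penalty(hidden_states, beta=1.0):
    # hidden_states: array of shape (T, hidden_dim), one hidden state per time step.
    norms = np.linalg.norm(hidden_states, axis=1)
    # Squared distance between the norms of successive hidden states,
    # averaged over time and scaled by the (assumed) coefficient beta.
    return beta * np.mean((norms[1:] - norms[:-1]) ** 2)

This term is added to the task loss during training; it discourages the hidden-state norm from growing or shrinking sharply from one time step to the next.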
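P.P.S. Similarly, a rough sketch of the stochastic weight binarization described in Zhouhan's abstract. The hard-sigmoid probability rule and the function name are assumptions for illustration; see the paper for the actual scheme:

import numpy as np

def stochastic_binarize(W):
    # Map each real-valued weight to +1 or -1 at random, with weights closer
    # to +1 more likely to binarize to +1 (assumed hard-sigmoid probability).
    p = np.clip((W + 1.0) / 2.0, 0.0, 1.0)
    return np.where(np.random.rand(*W.shape) < p, 1.0, -1.0)

With the weights in {-1, +1}, the multiplications in the forward pass reduce to sign changes; the second part of the method additionally quantizes the layer representations during back-propagation so that the remaining multiplications become binary shifts.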