Greetings,
After a week's hiatus, we are back with a regularly scheduled tea talk. This week Xavier Glorot will talk about some of his recent work with Yoshua. Hope to see you all there.
Date and Time: Thursday August 5th, 15h00
Location: LISA lab (AA3256)
Title: Deep Sparse Rectifier Neural Networks
Abstract: For multi-layer neural networks, the rectifier activation function, f(x) = max(0, x), is more consistent with neuroscience observations than the sigmoid or the hyperbolic tangent. Firstly, in addition to an L1 regularization on the activations, it creates sparse representations with exact zeros. Secondly, neurons in the cortex seem to work in a linear regime. In the context of gradient-based optimization and representation efficiency, direct mathematical advantages arise from these properties. However, potential intuitive problems remain: non-differentiability at 0, hard non-linearity, ill-conditioning, and unboundedness. We tested this activation function, and variants, on several image classification datasets. We show that networks of rectifying neurons yield significantly better accuracy than hyperbolic tangent networks.
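For those curious before the talk, here is a minimal NumPy sketch of the rectifier discussed in the abstract, next to tanh and a smooth softplus variant. The layer sizes, random toy data, and the sparsity check are illustrative assumptions only, not the experimental setup from the work being presented.

import numpy as np

def rectifier(x):
    # Rectifier activation from the abstract: f(x) = max(0, x).
    # Negative pre-activations are clipped to exact zeros, which is
    # what makes the hidden representation sparse.
    return np.maximum(0.0, x)

def softplus(x):
    # A smooth variant, f(x) = log(1 + exp(x)); differentiable at 0
    # but without exact zeros.
    return np.log1p(np.exp(x))

# Toy single hidden layer on random inputs (illustrative only).
rng = np.random.RandomState(0)
X = rng.randn(100, 50)          # 100 examples, 50 input features
W = rng.randn(50, 200) * 0.1    # weights for 200 hidden units
b = np.zeros(200)

h_rect = rectifier(X @ W + b)
h_tanh = np.tanh(X @ W + b)

# Fraction of exactly-zero activations: large for the rectifier,
# essentially zero for tanh.
print("rectifier sparsity:", np.mean(h_rect == 0.0))
print("tanh sparsity:     ", np.mean(h_tanh == 0.0))

# The L1 penalty on activations mentioned in the abstract would be added
# to the training cost as something like lambda * np.sum(np.abs(h_rect)).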
Cheers, Aaron