Just a reminder.
On Sun, Mar 9, 2014 at 8:14 PM, Razvan Pascanu r.pascanu@gmail.com wrote:
Hi all,
This Wednesday at 13:00 in room AA 3195, I will give a tea talk on my work with Guido Montufar, KyungHyun Cho, and Yoshua on the importance of depth for feedforward neural networks with piece-wise linear activation functions.
This work (split into two papers) was submitted to ICLR and COLT this year.
Title: On the number of linear regions of deep neural networks
Abstract: We study the complexity of functions computable by deep feedforward neural networks with piece-wise linear activations in terms of the number of regions of linearity that they have. Deep networks are able to sequentially map portions of each layer's input space to the same output. In this way, deep models compute functions with a compositional structure that is able to re-use pieces of computation exponentially often in terms of their depth. This note investigates the complexity of such compositional maps and contributes new theoretical results regarding the advantage of depth for neural networks with piece-wise linear activation functions.
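For anyone who wants to play with the core intuition before the talk, here is a minimal numpy sketch (my own illustration, not code from the papers; the "tent" layer and its constants are chosen just for this example). It composes a two-unit ReLU layer that folds [0, 1] onto itself with copies of itself, and counts the resulting linear pieces, which grow as 2^depth:

import numpy as np

def tent(x):
    # One ReLU layer with two units: h1 = relu(x), h2 = relu(x - 0.5).
    # The output 2*h1 - 4*h2 folds [0, 1] onto [0, 1] twice (a tent map).
    return 2.0 * np.maximum(x, 0.0) - 4.0 * np.maximum(x - 0.5, 0.0)

def count_linear_pieces(depth, n_samples=200001):
    # Compose the tent layer `depth` times and count linear pieces on [0, 1]
    # by detecting changes in the numerical slope on a dense grid.
    xs = np.linspace(0.0, 1.0, n_samples)
    y = xs.copy()
    for _ in range(depth):
        y = tent(y)
    slopes = np.diff(y) / np.diff(xs)
    # a new linear piece starts wherever the slope changes
    return 1 + int(np.sum(np.abs(np.diff(slopes)) > 1e-6))

for L in range(1, 7):
    print(f"depth {L}: {count_linear_pieces(L)} linear pieces")  # 2, 4, 8, ...

Each extra layer re-uses the same folding computation on every piece produced so far, which is the compositional re-use the abstract refers to; a shallow network with the same total number of hidden units can only add pieces linearly.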
Paper submitted to COLT: http://arxiv.org/abs/1402.1869
Paper submitted to ICLR: http://arxiv.org/abs/1312.6098
Hope to see many of you there,
Razvan