Hi all,

This week we will have a tea talk where Caglar, Kyunghyun and I will present the work submitted to AISTATS: Lp units for MLPs.

The talk will try to go beyond what we submitted. We will try to summarize some ideas we have to extend this work, namely by allowing more flexibility in the units. I will also talk about how one can visualize what these units are doing and how that could be useful.

The abstract of the paper:

In this paper we propose a novel nonlinear unit, which we call the Lp unit, for a multi-layer perceptron (MLP). The proposed Lp unit receives signals from several projections of the layer below and computes the normalized Lp norm. We notice two interesting interpretations of the Lp unit. First, we note that the proposed unit is a generalization of a number of conventional pooling operators such as average, root-mean-square and max pooling widely used in, for instance, convolutional neural networks (CNNs), HMAX models and Neocognitrons. Furthermore, under certain constraints, the Lp unit is a generalization of the recently proposed maxout unit (Goodfellow et al., 2013), which achieved state-of-the-art object recognition results on a number of benchmark datasets. Second, we provide a geometrical interpretation of the activation function. Each Lp unit defines a spherical boundary, with its exact shape defined by the order p. We claim that this makes it possible to obtain arbitrarily shaped, curved boundaries more efficiently by combining just a few Lp units of different orders. We empirically evaluate the proposed Lp units on a number of benchmark datasets and show that MLPs consisting of Lp units achieve state-of-the-art results.
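For those who haven't read the submission yet, here is a minimal numpy sketch of what a single Lp unit computes, as I understand it from the abstract: pool N linear projections of the input with a normalized Lp norm. The names, shapes and the bias term are my own illustration (the paper also learns p per unit), not the actual code of the submission.

    import numpy as np

    def lp_unit(x, W, b, p):
        # x: input vector (d,), W: projection weights (N, d),
        # b: projection biases (N,), p: order of the norm (p >= 1)
        z = W.dot(x) + b                           # N projections of the layer below
        return np.mean(np.abs(z) ** p) ** (1.0 / p)  # normalized Lp norm over the pool

    # p = 2 recovers root-mean-square pooling; large p approaches max pooling
    x = np.random.randn(5)
    W = np.random.randn(3, 5)
    b = np.zeros(3)
    print(lp_unit(x, W, b, p=2.0), lp_unit(x, W, b, p=50.0), np.abs(W.dot(x) + b).max())

Varying p between these regimes is what gives the spherical boundaries of different shapes mentioned in the abstract; more on that at the talk.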


Hope to see many of you there,

Razvan