[Lisa_teatalk] Tea talk this week, Friday 3 PM, room TBD [note change of speaker]

Razvan Pascanu r.pascanu at gmail.com
Wed Nov 13 20:56:15 EST 2013


Hi everyone,

 We've changed the speaker for this tea talk. Hal is going to give the talk
instead. The room remains TBD, but the time is the same: 15:00 Friday.

Hope to see many of you there.
Thanks.


Title: A picture is worth 13.6 words (on average)

Abstract: I'll discuss some recent work I've been involved in that ties
together language processing (which I know something about) and computer
vision (which I do not). Technically, I'll focus on problems related to
caption generation. I'll highlight some of the challenges, discuss some
successes (and more failures), and describe what I think the interesting
problems are moving forward. This talk is very informal, and feedback is
highly appreciated! It will include ideas and results that evolved from
joint work with Yiannis Aloimonos, Alex Berg, Tamara Berg, Jesse Dodge,
Aleks Ecins, Cornelia Fermuller, Amit Goyal, Xufeng Han, Alyssa Mensch,
Margaret Mitchell, Karl Stratos, Ching Lik Teo, Kota Yamaguchi and Yezhou
Yang.



On Wed, Nov 13, 2013 at 9:18 AM, Razvan Pascanu <r.pascanu at gmail.com> wrote:

> Hi all,
>
>  This week we will have a tea talk where Caglar, Kyunghyun, and I will
> present the work we submitted to AISTATS: Lp units for MLPs.
>
> The talk will go beyond what we submitted. We will summarize some ideas we
> have for extending this work, namely by allowing more flexibility in the
> units. I will also talk about how one can visualize what these units are
> doing and how that could be useful.
>
> The abstract of the paper:
>
> In this paper we propose a novel nonlinear unit, called the Lp unit, for
> the multi-layer perceptron (MLP). The proposed Lp unit receives signals
> from several projections of the layer below and computes the normalized Lp
> norm. We note two interesting interpretations of the Lp unit. First, the
> proposed unit is a generalization of a number of conventional pooling
> operators, such as average, root-mean-square and max pooling, widely used
> in, for instance, convolutional neural networks (CNNs), HMAX models and
> neocognitrons. Furthermore, under certain constraints, the Lp unit is a
> generalization of the recently proposed maxout unit (Goodfellow et al.,
> 2013), which achieved state-of-the-art object recognition results on a
> number of benchmark datasets. Second, we provide a geometrical
> interpretation of the activation function. Each Lp unit defines a
> spherical boundary, with its exact shape determined by the order p. We
> claim that this makes it possible to obtain arbitrarily shaped, curved
> boundaries more efficiently by combining just a few Lp units of different
> orders. We empirically evaluate the proposed Lp units on a number of
> datasets and show that MLPs consisting of Lp units achieve
> state-of-the-art results on a number of benchmark datasets.
>
>
> Hope to see many of you there,
>
> Razvan
>
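
For the curious, here is a minimal NumPy sketch of the Lp unit as described
in the quoted abstract above: the unit takes several linear projections of
the layer below and outputs their normalized Lp norm. The function name,
shapes, and sanity checks are illustrative assumptions, not code from the
paper.

    import numpy as np

    def lp_unit(x, W, b, p):
        """Normalized Lp norm of N linear projections of the input x.

        x: (d,) input vector from the layer below
        W: (N, d) projection weights feeding one Lp unit
        b: (N,) projection biases
        p: order of the norm (a learned parameter in the paper's setting)
        """
        z = W @ x + b                                # the N projections
        return np.mean(np.abs(z) ** p) ** (1.0 / p)  # normalized Lp norm

    # Sanity checks for the pooling interpretations claimed in the abstract:
    rng = np.random.default_rng(0)
    x = rng.standard_normal(5)
    W = rng.standard_normal((4, 5))
    b = np.zeros(4)
    z = W @ x + b

    # p = 1: average of magnitudes
    print(np.isclose(lp_unit(x, W, b, 1.0), np.mean(np.abs(z))))
    # p = 2: root-mean-square pooling
    print(np.isclose(lp_unit(x, W, b, 2.0), np.sqrt(np.mean(z ** 2))))
    # large p: approaches max pooling
    print(np.isclose(lp_unit(x, W, b, 100.0), np.max(np.abs(z)), rtol=0.05))

The max-pooling check uses a large but finite p, since max pooling is only
recovered in the limit as p grows without bound.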