[Lisa_teatalk] [Lisa_labo] Talk on Friday: Recombinator Networks: Learning Coarse-to-Fine Feature Aggregation

Martin Arjovsky martinarjovsky at gmail.com
Mon Jan 11 12:44:38 EST 2016


Hi, quick question: is there any chance the tea talks (and maybe the
discussions) could be streamed via a Hangout and/or recorded? It would be
pretty awesome.

Best!
Martin
On Jan 11, 2016 9:30 AM, "Jörg Bornschein" <bornj at iro.umontreal.ca> wrote:

> Hi everyone,
>
> I'd like to announce the first Tea Talk in 2016:
> (BTW, Happy New Year to those I haven't talked to yet!)
>
> Sina Honari will talk about "Recombinator Networks: Learning
> Coarse-to-Fine Feature Aggregation"
>
> --
> When: Fri., Jan. 15th, 2016, 14:30
> Who: Sina Honari
> Where: AA3195 (to be confirmed)
> Link: http://arxiv.org/abs/1511.07356
>
> Abstract:
>
> Deep neural networks with alternating convolutional, max-pooling and
> decimation layers are widely used in state-of-the-art architectures for
> computer vision. Max-pooling purposefully discards precise spatial
> information in order to create features that are more robust, and typically
> organized as lower-resolution spatial feature maps. On some tasks, such as
> whole-image classification, max-pooling-derived features are well suited;
> however, for tasks requiring precise localization, such as pixel-level
> prediction and segmentation, max-pooling destroys exactly the information
> required to perform well. Precise localization may be preserved by shallow
> convnets without pooling but at the expense of robustness. Can we have our
> max-pooled multi-layered cake and eat it too? Several papers have proposed
> summation- and concatenation-based methods for combining upsampled coarse,
> abstract features with finer features to produce robust pixel-level
> predictions. Here we introduce another model --- dubbed Recombinator
> Networks --- where coarse features inform finer features early in their
> formation such that finer features can make use of several layers of
> computation in deciding how to use coarse features. The model is trained
> once, end-to-end and performs better than summation-based architectures,
> reducing the error from the previous state of the art on two facial
> keypoint datasets, AFW and AFLW, by 30%, and beating the current
> state of the art on 300W without using extra data. We improve performance
> even further by adding a denoising prediction model based on a novel
> convnet formulation.
> --
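>
> For intuition, here is a minimal NumPy sketch of one coarse-to-fine
> recombination step (an illustration of the idea only, not the authors'
> implementation; all function names, shapes, and weights below are made up):
>
>   # Hypothetical sketch, not the paper's code. One recombination step:
>   # upsample the coarse map, concatenate it with the finer map along the
>   # channel axis, then convolve, so the finer branch can learn how to
>   # use the coarse features.
>   import numpy as np
>
>   def upsample_nearest(x, factor):
>       # x: (channels, h, w) -> (channels, h*factor, w*factor)
>       return x.repeat(factor, axis=1).repeat(factor, axis=2)
>
>   def conv3x3_relu(x, w):
>       # naive "same" 3x3 convolution + ReLU
>       # x: (c_in, h, w), w: (c_out, c_in, 3, 3)
>       c_in, h, wd = x.shape
>       xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
>       out = np.zeros((w.shape[0], h, wd))
>       for i in range(h):
>           for j in range(wd):
>               patch = xp[:, i:i + 3, j:j + 3]
>               out[:, i, j] = np.tensordot(w, patch, axes=([1, 2, 3], [0, 1, 2]))
>       return np.maximum(out, 0.0)
>
>   coarse = np.random.randn(32, 8, 8)    # low-resolution, abstract features
>   fine = np.random.randn(16, 16, 16)    # high-resolution, local features
>   merged = np.concatenate([upsample_nearest(coarse, 2), fine], axis=0)
>   weights = np.random.randn(16, 48, 3, 3) * 0.01
>   out = conv3x3_relu(merged, weights)   # finer features informed by coarse ones
>   print(out.shape)                      # (16, 16, 16)
>
> The key point is that the coarse map is injected before the convolution,
> so later layers can learn how to combine the two resolutions, rather than
> merging them with a fixed summation at the output.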
>
>
> Looking forward to seeing you there!
>
>
>    j