[Lisa_seminaires] TIME CHANGE: [Tea Talk] Adrià Recasens (MIT) May 25 2018 12:45PM AA3195

Michael Noukhovitch mnoukhov at gmail.com
Thu May 24 15:25:18 EDT 2018


There has been a *time change* for this talk to *12:45PM*; the room is
still *AA3195*.

Also, if you're interested in meeting with Adrià, he'll be free 2pm - 3pm.
Sign up here:
https://calendar.google.com/calendar/selfsched?sstoken=UVBVTVF0X25Nd09LfGRlZmF1bHR8ZWQ5MmNlYWMxODI4OWVkNmUzNGU3OTE4ZDExMGI0YTk

Sorry for the last-minute changes. A big shout-out to Dima B. for arranging
everything and keeping it running smoothly!

On Wed, May 23, 2018 at 4:21 PM Michael Noukhovitch <mnoukhov at gmail.com>
wrote:

> For our second talk this Friday, we have *Adrià Recasens* from *MIT* on
> *Friday May 25 2018* at *1:30 PM* in room *AA3195*
>
> This talk will be streamed as usual here:
> https://bluejeans.com/809027115/webrtc
> Adrià is open to meetings, but we don't know his schedule after his talk
> yet; if you're interested in meeting with him, shoot me an email.
>
> For great research discussions, look no further than this talk!
> Michael
>
> *TITLE* Where are they looking?
>
> *KEYWORDS* computer vision
>
> *ABSTRACT*
> Humans have the remarkable ability to follow the gaze of other people to
> identify what they are looking at. Following eye gaze, or gaze-following,
> is an important ability that allows us to understand what other people are
> thinking, the actions they are performing, and even predict what they might
> do next. Despite the importance of this topic, this problem has only been
> studied in limited scenarios within the computer vision community. In this
> talk I will present a deep neural network-based approach for
> gaze-following. Given an image and the location of a head, our approach
> follows the gaze of the person and identifies the object being looked at.
> I will also introduce GazeNet, a deep neural network that predicts the 3D
> direction of a person's gaze from the full 360 degrees. To
> complement GazeNet, I will present a novel saliency-based sampling layer
> for neural networks, the Saliency Sampler, which helps to improve the
> spatial sampling of input data for an arbitrary task. Our differentiable
> layer can be added as a preprocessing block to existing task networks and
> trained together with them in an end-to-end fashion. The effect of the layer is to
> efficiently estimate how to sample from the original data in order to boost
> task performance. For example, for the gaze-tracking task in which the
> original data might range in size up to several megapixels, but where the
> desired input images to the task network are much smaller, our layer learns
> how best to sample from the underlying high resolution data in a manner
> which preserves task-relevant information better than uniform downsampling.
>
> *BIO*
> You can read his bio here: https://people.csail.mit.edu/recasens/
>
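For those curious how the saliency-based sampling idea in the abstract might
look in practice, here is a minimal, hypothetical sketch (not Adrià's actual
code), assuming PyTorch: a small saliency network scores a coarse grid, the
scores are turned into a non-uniform sampling grid that pulls grid points
toward (and thus magnifies) salient regions, and the high-resolution image is
resampled with grid_sample before being passed to the task network. All class
names, layer sizes, and hyperparameters below are illustrative.

import torch
import torch.nn as nn
import torch.nn.functional as F

def gaussian_kernel(size, sigma):
    # Small 2D Gaussian used to let salient cells attract nearby grid points.
    coords = torch.arange(size, dtype=torch.float32) - (size - 1) / 2.0
    g = torch.exp(-(coords ** 2) / (2 * sigma ** 2))
    k2d = torch.outer(g, g)
    return k2d / k2d.sum()

class SaliencySampler(nn.Module):
    """Resamples a high-res image non-uniformly, magnifying salient regions."""
    def __init__(self, grid_size=31, out_size=224, kernel_size=9, sigma=3.0):
        super().__init__()
        self.grid_size = grid_size
        self.out_size = out_size
        # Toy stand-in saliency network; a real one would be much deeper.
        self.saliency_net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )
        k = gaussian_kernel(kernel_size, sigma)
        self.register_buffer("kernel", k.view(1, 1, kernel_size, kernel_size))
        # Normalized (-1, 1) coordinates of the coarse grid cells.
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, grid_size),
            torch.linspace(-1, 1, grid_size),
            indexing="ij",
        )
        self.register_buffer("xs", xs.view(1, 1, grid_size, grid_size))
        self.register_buffer("ys", ys.view(1, 1, grid_size, grid_size))

    def forward(self, image):
        # image: (B, 3, H, W) high-resolution input.
        low = F.interpolate(image, size=self.grid_size, mode="bilinear",
                            align_corners=True)
        s = F.softplus(self.saliency_net(low))        # positive saliency map
        pad = self.kernel.shape[-1] // 2
        norm = F.conv2d(s, self.kernel, padding=pad)  # local saliency mass
        # Each output grid point is a saliency-weighted average of nearby
        # coordinates, so dense sampling concentrates where saliency is high.
        gx = F.conv2d(s * self.xs, self.kernel, padding=pad) / (norm + 1e-6)
        gy = F.conv2d(s * self.ys, self.kernel, padding=pad) / (norm + 1e-6)
        grid = torch.stack([gx, gy], dim=-1).squeeze(1)        # (B, g, g, 2)
        grid = F.interpolate(grid.permute(0, 3, 1, 2), size=self.out_size,
                             mode="bilinear",
                             align_corners=True).permute(0, 2, 3, 1)
        return F.grid_sample(image, grid, align_corners=True)

# Usage sketch: the resampled image feeds an existing task network, and the
# whole pipeline stays differentiable, so both parts train end-to-end.
#   sampler = SaliencySampler()
#   small = sampler(big_image)          # (B, 3, 224, 224)
#   prediction = task_network(small)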


More information about the Lisa_seminaires mailing list