Hi everyone,
this Thursday, Kelvin will present
Show, Attend and Tell: Neural Image Caption Generation with Visual Attention
which was accepted for this year's ICML. Looking forward to seeing you there
and to getting your feedback.
Location: AA-3195
Time: Thursday, July 2nd, 3:30pm
Abstract
Inspired by recent work in machine translation and object detection, we
introduce an attention-based model that automatically learns to describe
the content of images. We describe how we can train this model in a
deterministic manner using standard backpropagation techniques and
stochastically by maximizing a variational lower bound. We also show
through visualization how the model is able to automatically learn to fix
its gaze on salient objects while generating the corresponding words in the
output sequence. We validate the use of attention with state-of-the-art
performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.
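For anyone unfamiliar with the deterministic ("soft") attention variant mentioned in the abstract, here is a minimal NumPy sketch, my own illustration rather than the paper's code, of how a context vector can be formed as a softmax-weighted average of image annotation vectors; because it is a smooth function of its inputs, it can be trained with standard backpropagation. The parameter names (W_a, W_h, w) and dimensions are hypothetical.

import numpy as np

# Illustrative sketch (not the authors' implementation) of soft attention.
# a: annotation vectors from a conv-net, one per image region (L regions, D dims).
# h: previous hidden state of the caption-generating RNN (H dims).
# W_a, W_h, w: hypothetical parameters of a small scoring MLP.

def soft_attention(a, h, W_a, W_h, w):
    # Score each image region against the current decoder state.
    scores = np.tanh(a @ W_a + h @ W_h) @ w   # shape (L,)
    # Softmax turns scores into attention weights that sum to 1.
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                      # shape (L,)
    # Context vector = expectation of the annotations under these weights;
    # this is the differentiable, backprop-trainable case from the abstract.
    z = alpha @ a                             # shape (D,)
    return z, alpha

# Tiny usage example with random toy dimensions.
L, D, H = 196, 512, 256
rng = np.random.default_rng(0)
a = rng.standard_normal((L, D))
h = rng.standard_normal(H)
W_a = rng.standard_normal((D, 64)) * 0.01
W_h = rng.standard_normal((H, 64)) * 0.01
w = rng.standard_normal(64) * 0.01
z, alpha = soft_attention(a, h, W_a, W_h, w)
print(z.shape, alpha.shape)  # (512,) (196,)

The stochastic ("hard") variant discussed in the abstract instead samples a single region from the attention distribution and is trained by maximizing a variational lower bound, since the sampling step is not differentiable.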
Jorg