Hi everyone,
This Thursday, Kelvin will present
Show, Attend and Tell: Neural Image Caption Generation with Visual Attention
which was accepted at this year's ICML. Looking forward to seeing you there and getting your feedback.
Location: AA-3195
Time: Thursday, July 2nd, 3:30pm
Abstract
Inspired by recent work in machine translation and object detection, we introduce an attention-based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.
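For anyone curious before the talk, here is a minimal sketch of the deterministic ("soft") attention step the abstract mentions: the context vector is a weighted average of the image's annotation vectors, with weights given by a softmax over scores computed from the annotations and the decoder's previous hidden state. The names, dimensions, and scoring function below are illustrative assumptions, not the paper's exact parameterization.

import numpy as np

def soft_attention(annotations, h_prev, W_a, W_h, v):
    # annotations: (L, D) image feature vectors, one per spatial location
    # h_prev:      (H,) previous decoder hidden state
    # W_a, W_h, v: assumed attention parameters, shapes (D, K), (H, K), (K,)
    # Score each location from its annotation and the decoder state.
    scores = np.tanh(annotations @ W_a + h_prev @ W_h) @ v   # (L,)
    # Softmax turns the scores into weights that sum to 1.
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # The context vector is the expected annotation under these weights,
    # so the whole step is differentiable and trainable by plain backprop.
    context = weights @ annotations                          # (D,)
    return context, weights

# Toy usage with random features (e.g. a 14x14 conv feature map -> L = 196).
rng = np.random.default_rng(0)
L, D, H, K = 196, 512, 256, 128
context, alpha = soft_attention(
    rng.standard_normal((L, D)), rng.standard_normal(H),
    rng.standard_normal((D, K)), rng.standard_normal((H, K)),
    rng.standard_normal(K))
print(context.shape, round(alpha.sum(), 6))   # (512,) 1.0

The stochastic ("hard") variant in the paper instead samples a single location from these weights and trains through the variational lower bound, since sampling breaks plain backpropagation.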
Jörg
Hi all,
For those not acquainted with this tradition: we often hold these practice talks, which give the presenter an opportunity to rehearse in front of a friendly audience and, *more importantly*, to get critical and constructive feedback to improve the presentation. For a 20-minute talk we often spend a whole hour discussing ways to improve things. Such talks, especially at major conferences like ICML, are important for the lab's visibility to the outside world!
Your participation in this event is thus greatly appreciated!
See you on Thursday at 3:30pm.
-- Yoshua
This is a reminder for the talk today.
Cheers,
Laurent
Thank you again to everyone who came to the practice talk. It was very useful for me, and I really appreciated the many thoughtful comments.
Best, -- Kelvin