This week we have *Abhishek Das* from *Georgia Tech (and currently interning at FAIR Montreal)* giving a talk on *August 17, 2018* at *10:30* in room *AA3195*.
Will this talk be streamed? Yes: https://mila.bluejeans.com/809027115/webrtc
Want to meet the speaker? Sign up here: https://calendar.google.com/calendar/selfsched?sstoken=UVBVTVF0X25Nd09LfGRlZmF1bHR8ZWQ5MmNlYWMxODI4OWVkNmUzNGU3OTE4ZDExMGI0YTk
☕ → talk → 🚶 to 3195
Michael
*TITLE* Connecting Vision and Language to Actions
*KEYWORDS* language understanding, VQA, embodied agents
*ABSTRACT* Building intelligent agents that can perceive the rich visual environment around us, communicate this understanding in natural language to humans and other agents, and execute actions in a physical environment is a long-term goal of Artificial Intelligence. In this talk, I will present some of my recent work at various points on this spectrum in connecting vision and language to actions: from Visual Dialog (CVPR17, ICCV17, HCOMP17) -- where we develop models capable of holding free-form, visually grounded natural-language conversations towards a downstream goal, along with ways to evaluate them -- to Embodied Question Answering (CVPR18) -- where we augment these models to actively navigate simulated environments and gather the visual information necessary for answering questions.
*BIO* Abhishek Das is a Computer Science PhD student at the Georgia Institute of Technology, advised by Dhruv Batra and working closely with Devi Parikh. He is interested in deep learning and its applications in building agents that can see (computer vision), talk (language modeling), act (reinforcement learning), and reason. He is a recipient of an Adobe Research Fellowship and a Snap Research Fellowship. He has held internship positions at the Queensland Brain Institute (Winter 2013, Winter 2014), Virginia Tech (Fall 2015), and Facebook AI Research (Summer 2017, Winter 2018, Summer 2018). He graduated from the Indian Institute of Technology Roorkee in 2015 with a Bachelor's degree in Electrical Engineering.
For those wishing to meet with Abhishek:
He will only be available until 1:30pm, so if you're keen to meet him, a group will be going out to lunch together after the talk. Please stay after the talk and join!
Sorry, the streaming link has changed: https://mila.bluejeans.com/9010777275/webrtc