Reminder: this is in 20 minutes!

On Tue, Mar 13, 2018 at 12:24 PM Michael Noukhovitch <mnoukhov@gmail.com> wrote:
This week we are lucky to have Alexander Vezhnevets, a research scientist at DeepMind, giving a talk on Friday, March 16 at 10:30AM in room AA1360.

If you want to meet with Alexander in the afternoon, add your name and the times you're free here: https://doodle.com/poll/6bbzqikmtx5nuktv

Navigate your way by walking towards the elevator, taking it down, exiting, and making your way to room 1360 to come to this FuN-tastic talk!
Michael

TITLE What we want from (H)RL and other FuN topics

KEYWORDS Hierarchical Learning, RL, Compositional Learning

ABSTRACT
Deep reinforcement learning is making headlines: superhuman at Go and Space Invaders, possibly StarCraft next! Yet most state-of-the-art, human-beating architectures are reactive and data-hungry. They don't possess transferable skills, can't break complex tasks into sub-tasks, plan into the future, or find more than one solution to a problem. Hierarchical reinforcement learning is an area of research that aims to build agents with complex, structured behaviour by endowing them with these desired properties of intelligence.

In this talk we will review recent progress in HRL and discuss one model, FeUdal Networks, in more detail. FeUdal Networks (FuN) is a neural network architecture that learns to decompose its behaviour into meaningful primitives and then reuse them to acquire new, complex behaviours more efficiently. This allows it to reason at different temporal resolutions and thereby improve long-term credit assignment and memory.
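For those curious what "different temporal resolutions" looks like in code, here is a minimal, illustrative sketch of a two-level manager/worker hierarchy in the spirit of FuN. This is not the actual FeUdal Networks implementation from the talk or paper; all module names, layer sizes, and the goal-update interval `c` are hypothetical choices for illustration only.

```python
# Illustrative manager/worker hierarchy sketch (NOT the official FuN code).
# The manager emits a goal at a coarse timescale; the worker acts every step.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Manager(nn.Module):
    """Operates at a coarser temporal resolution: emits a goal every `c` steps."""
    def __init__(self, state_dim, goal_dim):
        super().__init__()
        self.rnn = nn.LSTMCell(state_dim, goal_dim)

    def forward(self, state, hidden):
        h, c = self.rnn(state, hidden)
        goal = F.normalize(h, dim=-1)  # a direction in latent state space
        return goal, (h, c)


class Worker(nn.Module):
    """Produces primitive actions at every step, conditioned on the manager's goal."""
    def __init__(self, state_dim, goal_dim, n_actions):
        super().__init__()
        self.policy = nn.Sequential(
            nn.Linear(state_dim + goal_dim, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, state, goal):
        logits = self.policy(torch.cat([state, goal], dim=-1))
        return torch.distributions.Categorical(logits=logits)


# Usage: the manager updates its goal only every `c` environment steps,
# while the worker picks an action at every step -- two temporal resolutions.
state_dim, goal_dim, n_actions, c = 16, 8, 4, 10
manager = Manager(state_dim, goal_dim)
worker = Worker(state_dim, goal_dim, n_actions)
hidden = (torch.zeros(1, goal_dim), torch.zeros(1, goal_dim))
goal = torch.zeros(1, goal_dim)

for t in range(30):
    state = torch.randn(1, state_dim)   # stand-in for an observation encoding
    if t % c == 0:                      # manager ticks at the coarser timescale
        goal, hidden = manager(state, hidden)
    action = worker(state, goal.detach()).sample()
```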

BIO
Alexander Vezhnevets is a research scientist at DeepMind working on hierarchical RL. Originally from Moscow, he got his PhD in Machine Learning from ETH Zurich, where he worked with Joachim Buhmann on structured output learning. He then spent two years in lovely Edinburgh working on computer vision with Vittorio Ferrari.