[Lisa_seminaires] [Double Tea Talk] Marc Bellemare (Brain) x Pieter Abbeel (UC Berkeley) Fri 24 May 10h30 Mila Auditorium

Rim Assouel rim.assouel at gmail.com
Mon May 6 17:06:04 EDT 2019


On May 24th we will host a special double tea talk (40 minutes each) featuring Marc G. Bellemare and Pieter Abbeel, from 10:30 AM to 12:00 PM at the Mila Auditorium!

We are sending this notification early so that you can all mark your calendars :) 

The talks will be streamed and recorded here: <https://mila.bluejeans.com/4862024040/webrtc>

See you there, 

The Tea Talk Team :) 

TITLE Some progress on understanding the benefits of distributional reinforcement learning

ABSTRACT

In this talk I will review what we now know about distributional reinforcement learning and how its full benefits only emerge when it is combined with nonlinear representations such as deep networks. I will discuss how trying to understand the strong empirical performance of distributional RL has led us to a range of exciting results on representation learning for RL, in particular a formulation of optimal representation learning based on the geometric notion of the value function polytope.
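
For readers unfamiliar with the topic: distributional RL, as in Marc's categorical (C51) algorithm, models the full distribution of returns rather than only its expectation. Below is a minimal NumPy sketch of the projected distributional Bellman update at the heart of that algorithm; the support size, value range, and discount are illustrative choices, not details from the talk.

import numpy as np

# Illustrative constants (not from the talk): a 51-atom support on [-10, 10].
N_ATOMS = 51
V_MIN, V_MAX = -10.0, 10.0
GAMMA = 0.99
support = np.linspace(V_MIN, V_MAX, N_ATOMS)
delta_z = (V_MAX - V_MIN) / (N_ATOMS - 1)

def categorical_projection(probs, reward, done):
    """Project the target distribution r + gamma*Z onto the fixed support."""
    # Apply the Bellman operator to each atom, clipping to the support range.
    tz = np.clip(reward + (1.0 - done) * GAMMA * support, V_MIN, V_MAX)
    # Fractional position of each shifted atom on the fixed support.
    b = (tz - V_MIN) / delta_z
    lower, upper = np.floor(b).astype(int), np.ceil(b).astype(int)
    projected = np.zeros(N_ATOMS)
    # Split each atom's probability between its two nearest support atoms.
    np.add.at(projected, lower, probs * (upper - b))
    np.add.at(projected, upper, probs * (b - lower))
    # If b lands exactly on an atom, both terms above vanish; restore it.
    np.add.at(projected, lower, probs * (lower == upper))
    return projected

# Example: target distribution after a reward of 1.0, from a uniform prediction.
uniform = np.ones(N_ATOMS) / N_ATOMS
target = categorical_projection(uniform, reward=1.0, done=0.0)
assert np.isclose(target.sum(), 1.0)

A deep distributional agent then minimizes the cross-entropy between its predicted atom probabilities and this projected target, rather than a squared error on expected values.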

BIO

Marc G. Bellemare leads the reinforcement learning efforts at Google Brain in Montreal and holds a Canada CIFAR AI Chair at the Quebec Artificial Intelligence Institute (Mila). He received his Ph.D. from the University of Alberta, where he developed the highly successful Arcade Learning Environment, the platform that sparked the recent revival in deep reinforcement learning research. He joined DeepMind in 2013, prior to its acquisition by Google, and was a research scientist there until his return to Canada in 2017. During his tenure at DeepMind he made important contributions to deep reinforcement learning, in particular pioneering the distributional method. He is also a CIFAR Learning in Machines & Brains Fellow and an adjunct professor at McGill University.

………….


TITLE Model-based RL via Meta-Model-Free RL

ABSTRACT

Model-free RL has seen great asymptotic successes, but its sample complexity tends to be high. Model-based RL carries the promise of better sample efficiency, and indeed has shown more data-efficient learning, but tends to fall well short of model-free RL in asymptotic performance. In this presentation I will describe a new approach to model-based RL that brings in ideas from domain randomization and meta-model-free RL, achieving the best of both worlds: fast learning and great asymptotic performance. Our method is evaluated on several MuJoCo environments (PR2 reacher, swimmer, hopper, ant, walker) and is able to learn LEGO-block placement on a real robot in 10 minutes.
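
To make the general recipe concrete, here is a hedged, self-contained toy in NumPy, in the spirit of meta-policy optimization over an ensemble of dynamics models: a policy parameter is meta-trained so that a single model-free gradient step under any member of a model ensemble (whose spread stands in for domain randomization) yields good performance, and at test time one such step adapts it to the real dynamics. Everything here (scalar dynamics, linear policy, finite-difference gradients) is an illustrative stand-in, not the method from the talk.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "real" scalar dynamics s' = A*s + B*u, unknown to the agent.
A_TRUE, B_TRUE = 1.2, 0.5

def cost(a, b, k, horizon=20, s0=1.0):
    """Quadratic cost of the linear policy u = -k*s under dynamics (a, b)."""
    s, total = s0, 0.0
    for _ in range(horizon):
        u = -k * s
        total += s * s + 0.1 * u * u
        s = a * s + b * u
    return total

def inner_adapt(a, b, k, lr=0.01, eps=1e-4):
    """One 'model-free' finite-difference gradient step on k inside model (a, b)."""
    grad = (cost(a, b, k + eps) - cost(a, b, k - eps)) / (2 * eps)
    return k - lr * grad

# Stand-in for an ensemble of dynamics models fit to real data; their
# disagreement plays the role of domain randomization over the dynamics.
models = [(A_TRUE + rng.normal(0, 0.1), B_TRUE + rng.normal(0, 0.1))
          for _ in range(5)]

def meta_objective(k):
    """Average cost across models *after* one inner adaptation step in each."""
    return np.mean([cost(a, b, inner_adapt(a, b, k)) for a, b in models])

# Meta-train k so that a single adaptation step works well on every model.
k, eps, lr = 0.5, 1e-4, 0.002
for _ in range(300):
    g = (meta_objective(k + eps) - meta_objective(k - eps)) / (2 * eps)
    k -= lr * g

# At test time, one cheap inner step adapts the policy to the real dynamics.
k_adapted = inner_adapt(A_TRUE, B_TRUE, k)
print(f"meta-learned k = {k:.3f}, adapted k = {k_adapted:.3f}, "
      f"real-world cost = {cost(A_TRUE, B_TRUE, k_adapted):.3f}")

The point of the construction is that learning inside the models is cheap (no real-world samples), while the real environment is only needed for the small dataset behind the ensemble and the final adaptation step.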

BIO

Professor Pieter Abbeel is Director of the Berkeley Robot Learning Lab and Co-Director of the Berkeley Artificial Intelligence Research (BAIR) Lab. Abbeel's research strives to build ever more intelligent systems, which has his lab pushing the frontiers of deep reinforcement learning, deep imitation learning, deep unsupervised learning, transfer learning, meta-learning, and learning to learn, as well as studying the influence of AI on society. His lab also investigates how AI could advance other science and engineering disciplines. Abbeel has founded three companies (Gradescope, Covariant, and Berkeley Open Arms), advises many AI and robotics start-ups, and is a frequently sought-after speaker worldwide for C-suite sessions on the future and strategy of AI. He has received many awards and honors, including the PECASE, NSF-CAREER, ONR-YIP, DARPA-YFA, and TR35. His work is frequently featured in the press, including the New York Times, Wall Street Journal, BBC, Rolling Stone, Wired, and Tech Review.

