[Lisa_seminaires] Fwd: Marc Bellemare talk Wed Apr 12, 10:30am, MC103

Yoshua Bengio yoshua.umontreal at gmail.com
Mon Apr 10 15:46:51 EDT 2017


---------- Forwarded message ----------
From: Doina Precup <dprecup at cs.mcgill.ca>
Date: 2017-04-10 15:32 GMT-04:00
Subject: Marc Bellemare talk Wed Apr 12, 10:30am, MC103
To: labrl at cs.mcgill.ca
Cc: "Marc G. Bellemare" <marcgb at gmail.com>, Gen Fried <
genevieve.fried at mail.mcgill.ca>, Yoshua Bengio <yoshua.umontreal at gmail.com>


Hi everyone,

Marc Bellemare, who has made great contributions to the theory and practice
of reinforcement learning (including the ALE environment that we all use &
love), will be visiting us on Wednesday. The talk is at 10:30am in MC103.
If you want to meet with Marc, please send an email to Gen (cc'd); Marc
will be around for the day.

Title: The role of density models in reinforcement learning

Abstract: Much of the theoretical foundation of reinforcement learning
assumes, or derives from, a tabular representation. In practical
applications, however, the tabular representation is usually impractical
and undesirable. The translation to practice therefore typically involves a
regression step: a projection of the value function onto a tractable
function class, for example a deep network. In performing this regression,
we often lose many of the appealing properties of tabular representations,
including measures of value uncertainty and the ability to learn from a few
examples. In this talk I will argue that a particular kind of probabilistic
generative model, the density model, allows us to recover the benefits of
the tabular representation without sacrificing generalization. I will first
revisit the Compress and Control approach, which uses density models (or in
fact any sequential data compression algorithm) to model the value
function. As an example, I will demonstrate an agent that learns to play
Pong in 13 games. I will subsequently present our recent work on
pseudo-counts, showing how to induce intrinsically motivated behaviour from
a simple density model, and how this behaviour leads to state-of-the-art
exploration in one of the hardest Atari 2600 games, Montezuma's Revenge.
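
(A quick preview for those curious about the pseudo-count idea: it converts
a density model's probabilities into an estimated visit count, which then
drives an exploration bonus. Below is a minimal Python sketch of that
conversion, based on the published pseudo-count definition; the function
names and the bonus scale are illustrative choices, not details from the
talk.)

    import math

    def pseudo_count(rho, rho_prime):
        # rho: probability the density model assigns to a state x before
        # updating on x; rho_prime: probability assigned after a single
        # update on x (the "recoding probability").
        # Solving rho = N/n and rho_prime = (N + 1)/(n + 1) for N yields
        # the pseudo-count below.
        gain = rho_prime - rho
        if gain <= 0:
            # A learning-positive model satisfies rho_prime > rho; guard
            # against numerical noise by treating the state as fully known.
            return float("inf")
        return rho * (1.0 - rho_prime) / gain

    def exploration_bonus(rho, rho_prime, beta=0.01):
        # Count-based intrinsic reward, roughly beta / sqrt(N); beta and
        # the 0.01 regularizer are assumed values, not from the talk.
        return beta / math.sqrt(pseudo_count(rho, rho_prime) + 0.01)

A novel state (small rho, large gain) yields a small pseudo-count and hence
a large bonus; a familiar state yields a large pseudo-count and a bonus
near zero.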

Short bio

Marc G. Bellemare received his M.Sc. from McGill University and his Ph.D.
from the University of Alberta, where he investigated the concept of
domain-independent agents and led the design of the highly successful
Arcade Learning Environment. His research interests include reinforcement
learning, online learning, information theory, lifelong learning, and
randomized algorithms. He is currently a Senior Research Scientist at
DeepMind.

http://www.marcgbellemare.info/static/index.html

Best,
Doina


More information about the Lisa_seminaires mailing list