Reminder: this is in 1 hour!
On Mon, Nov 27, 2017, 08:29 Michael Noukhovitch, mnoukhov@gmail.com wrote:
This week we have a researcher from MS Maluuba, *Romain Laroche*, giving a talk on *Friday Dec 1* at *10:30AM* in room *AA6214*.
See you there; this RL talk is sure to be rewarding!

Michael
*KEYWORDS* policy-based RL, bootstrapping, data/computational efficiency
*TITLE* Safe Policy Improvement with Baseline Bootstrapping
*ABSTRACT*
A common goal in Reinforcement Learning is to derive a good strategy given a limited batch of data. In this paper, we propose a new strategy to compute a safe policy, guaranteed to perform at least as well as a given baseline strategy. We argue that the assumptions made in previous work are too strong for real-world applications and propose new algorithms that only require those assumptions to be satisfied on a subset of the state-action pairs. While significantly relaxing the assumptions, our algorithms achieve the same accuracy guarantees as the previous work and are also much more computationally efficient. We also show that the algorithms can be adapted to model-free Reinforcement Learning.
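For anyone who wants a concrete picture of the "baseline bootstrapping" idea before the talk, here is a rough tabular Python sketch of one way such a constraint can be applied; it is my own illustration, not Romain's algorithm, and the threshold n_min, the array shapes, and the greedy redistribution of the remaining probability mass are all assumptions on my part.

import numpy as np

def baseline_bootstrapped_policy(q_hat, pi_baseline, counts, n_min=10):
    """Illustrative sketch: state-action pairs with fewer than n_min
    observations in the batch are 'bootstrapped', i.e. the new policy
    copies the baseline's probability mass there; the remaining mass
    is placed greedily on the best well-estimated action.

    q_hat:       (S, A) array of estimated action values
    pi_baseline: (S, A) array of baseline policy probabilities
    counts:      (S, A) array of observation counts in the batch
    n_min:       uncertainty threshold (assumed hyperparameter)
    """
    n_states, _ = q_hat.shape
    pi_new = np.zeros_like(pi_baseline)
    for s in range(n_states):
        uncertain = counts[s] < n_min
        # Keep the baseline's behaviour wherever the batch gives too little evidence.
        pi_new[s, uncertain] = pi_baseline[s, uncertain]
        if uncertain.all():
            continue  # nothing is well-estimated: fall back entirely on the baseline
        # Spend the remaining probability mass on the best well-estimated action.
        free_mass = 1.0 - pi_new[s].sum()
        q_masked = np.where(uncertain, -np.inf, q_hat[s])
        pi_new[s, np.argmax(q_masked)] += free_mass
    return pi_new

The intuition is simply that the learned policy is only allowed to deviate from the baseline where the batch contains enough data to trust the value estimates, which is how a performance guarantee relative to the baseline can be retained.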
*BIO*
Having graduated from Ecole Polytechnique in 2001 and then from Telecom ParisTech in 2003, Romain Laroche joined the dialogue team at Orange in Paris, where he defended a corporate PhD in 2010 on Reinforcement Learning for industrial dialogue systems at Université Pierre et Marie Curie (Paris VI).
Romain joined Maluuba in 2016 and is now a researcher at Microsoft Research Maluuba. Over the past 6 years he has supervised 5 PhD students, 3 postdocs, and a dozen undergrad interns. With more than 40 papers at international conferences, his interests now focus on Reinforcement Learning. His preferred application domains are dialogue systems (still his primary real-world motivation), Atari games (for benchmarking against other algorithms), and navigation toy problems (for empirical analysis and algorithm design).