A common goal in Reinforcement Learning is to derive a good policy from a limited batch of data. In this paper, we propose a new strategy for computing a safe policy, guaranteed to perform at least as well as a given baseline policy. We argue that the assumptions made in previous work are too strong for real-world applications, and we propose new algorithms that require those assumptions to hold only on a subset of the state-action pairs. While significantly relaxing the assumptions, our algorithms achieve the same accuracy guarantees as previous work and are also much more computationally efficient. We further show that the algorithms can be adapted to model-free Reinforcement Learning.
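As a rough illustration of the general idea of constraining an improved policy to a baseline on the state-action pairs where the batch provides too little evidence, here is a minimal sketch. It is not necessarily the paper's exact algorithm; the function name, arguments, and the count threshold n_min are all assumptions made for illustration only.

```python
import numpy as np

def constrained_improvement(q_hat, pi_baseline, counts, n_min):
    """Sketch: copy the baseline wherever a state-action pair has fewer
    than n_min observations, and greedily reallocate the remaining
    probability mass among well-observed actions elsewhere.

    q_hat:        (S, A) array of action values estimated from the batch.
    pi_baseline:  (S, A) array of baseline policy probabilities.
    counts:       (S, A) array of state-action visit counts in the batch.
    n_min:        hypothetical count threshold below which we fall back
                  to the baseline.
    """
    n_states, n_actions = q_hat.shape
    pi_new = np.zeros_like(pi_baseline)
    for s in range(n_states):
        rare = counts[s] < n_min                 # poorly observed actions
        # Keep the baseline probabilities on poorly observed actions.
        pi_new[s, rare] = pi_baseline[s, rare]
        free_mass = 1.0 - pi_new[s].sum()        # mass left to redistribute
        well_observed = np.where(~rare)[0]
        if well_observed.size > 0:
            # Put the remaining mass on the best well-observed action.
            best = well_observed[np.argmax(q_hat[s, well_observed])]
            pi_new[s, best] += free_mass
        else:
            # No action is well observed: keep the baseline entirely.
            pi_new[s] = pi_baseline[s]
    return pi_new
```

Under this sketch, the assumptions on the data only need to hold on the well-observed state-action pairs; everywhere else the returned policy behaves exactly like the baseline, which is what makes a performance guarantee relative to the baseline plausible.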
BIO
Romain Laroche graduated from Ecole Polytechnique in 2001 and from Telecom ParisTech in 2003. He then joined the dialogue team at Orange in Paris, where he defended a corporate PhD in 2010 on Reinforcement Learning for industrial dialogue systems at Université Pierre et Marie Curie (Paris VI).
Romain joined Maluuba in 2016 and is now a researcher at Microsoft Research Maluuba. Over the past six years he has supervised 5 PhD students, 3 postdocs, and a dozen undergraduate interns. With more than 40 papers at international conferences, his research now focuses on Reinforcement Learning. His preferred application domains are dialogue systems (still his primary real-world motivation), Atari games (for benchmarking against other algorithms), and navigation toy problems (for empirical analysis and algorithm design).