[Lisa_seminaires] REMINDER: UdeM-McGill-MITACS machine learning seminar Fri Nov. 16, 11:30am, MC 437

Hugo Larochelle larocheh at IRO.UMontreal.CA
Thu 15 Nov 12:00:10 EST 2007


This week's seminar (see
http://www.iro.umontreal.ca/article.php3?id_article=107&lang=en):


Bayesian Reinforcement Learning


by Mohammad Ghavamzadeh,
Department of Computing Science
University of Alberta

Location: McConnell Building (McGill), room 437
Time: November 16th, 2007, 11:30 am

Reinforcement learning is a class of learning problems in which an
agent interacts with an unfamiliar, dynamic and stochastic
environment, with the goal of optimizing some measure of its
long-term performance. Despite extensive research and numerous
successes in a number of different domains, several fundamental
obstacles still hinder the widespread application of reinforcement
learning methodology to real-world problems. Recent advances have
shown that the Bayesian approach to reinforcement learning offers
viable solutions to some of these major limitations, such as the
lack of confidence intervals for performance predictions, the
difficulty of appropriately reconciling exploration with
exploitation, and the lack of a systematic method for encoding
prior knowledge and formulating domain assumptions.
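
For concreteness, one standard such measure of long-term performance
(the notation below is mine, not part of the abstract) is the
expected discounted return of a parameterized policy \pi_\theta:

    \eta(\theta) = \mathbb{E}_{\pi_\theta}\!\left[ \sum_{t=0}^{\infty} \gamma^{t} r_t \right],
    \qquad 0 \le \gamma < 1,

where r_t is the reward received at time step t. The policy gradient
methods described below estimate and ascend \nabla_\theta \eta(\theta).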

Policy gradient methods are reinforcement learning algorithms that  
adapt a parameterized policy by following a performance gradient  
estimate. This talk will present two Bayesian policy gradient  
algorithms. These algorithms use Gaussian processes to define a prior
distribution over the performance gradient, and obtain closed-form
expressions for its posterior distribution, conditioned on the  
observed data. The posterior mean serves as the policy gradient  
estimate and is used to update the policy, while the posterior  
covariance allows us to gauge the reliability of the update. In the  
first algorithm, the basic observable unit, upon which learning and  
inference are based, is a complete trajectory, allowing the algorithm  
to handle non-Markovian systems. The second algorithm takes advantage  
of the Markov property of the system trajectories and uses individual  
state-action-reward transitions as its basic observable unit. This  
helps reduce variance in the gradient estimates and facilitates  
handling continuing problems.
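
As a rough illustration of the idea, the Python sketch below is a
minimal stand-in, not the algorithm presented in the talk: the toy
one-step environment, the scalar conjugate-Gaussian observation model
for per-trajectory gradient samples, and all hyperparameters are
invented for the example. It treats likelihood-ratio gradient samples
from complete trajectories as noisy observations of the true gradient,
forms a Gaussian posterior over it, and uses the posterior mean to
update the policy while the posterior variance indicates how reliable
that update is.

import numpy as np

rng = np.random.default_rng(0)

def trajectory_gradient_sample(theta):
    # One "complete trajectory" of a toy one-step problem:
    # action a ~ N(theta, 1), reward r = -(a - 3)^2.
    a = rng.normal(theta, 1.0)
    r = -(a - 3.0) ** 2
    score = a - theta          # d/dtheta log N(a | theta, 1)
    return r * score           # likelihood-ratio gradient sample

def gradient_posterior(samples, prior_var=10.0, noise_var=25.0):
    # Conjugate Gaussian model: zero-mean Gaussian prior on the true
    # gradient, i.i.d. Gaussian noise on each per-trajectory sample.
    n = len(samples)
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)
    post_mean = post_var * sum(samples) / noise_var
    return post_mean, post_var

theta, step = 0.0, 0.05
for it in range(200):
    grads = [trajectory_gradient_sample(theta) for _ in range(20)]
    mean, var = gradient_posterior(grads)
    theta += step * mean       # posterior mean as the gradient estimate
    if it % 50 == 0:
        print(f"iter {it:3d}  theta = {theta:6.3f}  posterior var = {var:.3f}")

print("final theta (optimum is 3):", round(theta, 3))

The algorithms in the talk replace this scalar conjugate-Gaussian
model with Gaussian processes over trajectories (or over individual
state-action-reward transitions), but the roles of the posterior mean
and posterior covariance are the same.
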
_______________________________________________
Lisa_seminaires mailing list
Lisa_seminaires at mercure.iro.umontreal.ca
https://webmail.iro.umontreal.ca/mailman/listinfo/lisa_seminaires


