There's a MITACS seminar at McGill this Thursday, too:
Approximate Inference for the Loss-Calibrated Bayesian
by Simon Lacoste-Julien
Department of Engineering, University of Cambridge
Location: McConnell Engineering Bldg (McGill), room 103
Time: Thursday, December 16, 16:00
Bayesian decision theory provides a well-defined theoretical framework for rational decision making under uncertainty. However, even if we assume that our subjective beliefs about the world have been well-specified, we usually need to resort to approximations in order to use them in practice. Despite the central role of the loss in the decision theory formulation, most prevalent Bayesian approximation methods focus on approximating the posterior over parameters with no consideration of the loss. In this talk, our main point is to bring back in focus the need to *calibrate* the approximation methods to the *loss* under consideration. This philosophy has already been widely applied in the frequentist statistics / discriminative machine learning literature, for example with the use of surrogate loss functions, but, surprisingly, not in Bayesian statistics. We provide examples showing the limitations of disregarding the loss in standard approximate inference schemes and outline several interesting research directions arising from this new perspective. As a first loss-calibrated attempt, we propose an EM-like algorithm on the Bayesian posterior risk and show how it can improve a standard approach to Gaussian process classification when losses are asymmetric.
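To get an intuition for the abstract's core point, here is a minimal sketch (hypothetical, not from the talk) of Bayesian decision making under an asymmetric loss: the Bayes-optimal action minimizes the posterior expected loss, so an approximate posterior that is "close" overall can still flip the decision if it errs near the loss-calibrated threshold. The loss matrix, probabilities, and threshold below are illustrative assumptions.

```python
# Minimal sketch (hypothetical numbers): why posterior approximations
# should be calibrated to the loss, not just fit the posterior.
import numpy as np

# Asymmetric loss: rows = action (0 or 1), cols = true label (0 or 1).
# Missing a positive (act 0 when y=1) costs 10x a false alarm.
LOSS = np.array([[0.0, 10.0],
                 [1.0, 0.0]])

def bayes_decision(p1):
    """Action minimizing posterior expected loss, given P(y=1) = p1."""
    expected = LOSS @ np.array([1.0 - p1, p1])  # expected loss per action
    return int(np.argmin(expected))

# With this loss, action 1 is optimal whenever p1 > 1/11 ~= 0.09,
# not the loss-blind 0.5 threshold.
p_true = 0.15
print("loss-blind rule (threshold 0.5):", int(p_true > 0.5))   # -> 0
print("loss-calibrated Bayes action:  ", bayes_decision(p_true))  # -> 1

# An approximation off by only 0.07 in probability, but landing on the
# wrong side of the 1/11 threshold, reverses the decision entirely.
p_approx = 0.08
print("action under approx. posterior:", bayes_decision(p_approx))  # -> 0
```

The example shows why a "good" posterior approximation by a loss-agnostic criterion can still be a bad one for decision making: accuracy matters most where the loss makes the decision boundary sit.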