Please join us this upcoming Thursday (June 6th) for the following
talk given by Anoop Korattikara (a student of Max Welling). The talk
will be held at 2:00pm in AA3195 as usual.
Title: Markov Chain Monte Carlo and the Bias-Variance Tradeoff
Bayesian posterior sampling can be painfully slow on very large
datasets, since traditional MCMC methods such as Hybrid Monte Carlo
are designed to be asymptotically unbiased and require processing the
entire dataset to generate each sample. Thus, given a small amount of
sampling time, the variance of estimates computed using such methods
can be prohibitively high. We argue that lower-risk estimates can often be
obtained using "approximate" MCMC methods that mix very fast (and thus
lower the variance quickly) at the expense of a small bias in the
stationary distribution. I will first talk about two such biased
algorithms: Stochastic Gradient Langevin Dynamics and its successor
Stochastic Gradient Fisher Scoring, both of which use stochastic
gradients estimated from mini-batches of data, allowing them to mix
very fast. Then I will present our current work on a new (biased) MCMC
algorithm that uses a sequential hypothesis test to approximate the
Metropolis-Hastings test, allowing us to accept/reject samples with
high confidence using only a fraction of the data required for the
exact test.
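
For readers unfamiliar with the first of these methods, here is a minimal sketch of a single Stochastic Gradient Langevin Dynamics update in Python, in the spirit of the mini-batch gradient idea described above. The function and argument names (sgld_step, grad_log_prior, grad_log_lik, etc.) are illustrative placeholders, not anything from the talk.

import numpy as np

def sgld_step(theta, minibatch, grad_log_prior, grad_log_lik, N, eps):
    """One Stochastic Gradient Langevin Dynamics update (sketch).

    theta          -- current parameter vector (numpy array)
    minibatch      -- n data points drawn from the full dataset of size N
    grad_log_prior -- callable returning the gradient of log p(theta)
    grad_log_lik   -- callable returning the gradient of log p(x | theta) for one x
    eps            -- step size epsilon_t
    """
    n = len(minibatch)
    # Unbiased estimate of the full-data gradient, rescaled from the mini-batch.
    grad = grad_log_prior(theta) + (N / n) * sum(
        grad_log_lik(x, theta) for x in minibatch)
    # Langevin dynamics: half a gradient step plus Gaussian noise of variance eps.
    # No Metropolis-Hastings correction is applied, which is the source of the
    # small bias traded for fast mixing.
    noise = np.random.normal(0.0, np.sqrt(eps), size=theta.shape)
    return theta + 0.5 * eps * grad + noise

Stochastic Gradient Fisher Scoring builds on the same mini-batch gradient estimate, adding a preconditioner based on the Fisher information.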
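
And a rough sketch of how a sequential hypothesis test can stand in for the exact Metropolis-Hastings test: the average log-likelihood difference over a growing subset of the data is compared against a threshold mu0 (which in the exact test would be computed from the uniform draw, the prior, and the proposal), and the procedure stops as soon as the decision is clear at a chosen error tolerance. The names, defaults, and particular test statistic below are assumptions for illustration, not the algorithm as it will be presented.

import numpy as np
from scipy import stats

def sequential_mh_test(mu0, log_lik_diffs, batch_size=100, error_tol=0.05):
    """Approximate MH accept/reject decision from a fraction of the data (sketch).

    mu0           -- decision threshold (derived from the uniform draw and the
                     prior/proposal ratio in the exact test)
    log_lik_diffs -- log p(x_i | theta') - log p(x_i | theta) for all N points,
                     assumed to be in random order
    Returns (accept, n_used).
    """
    N = len(log_lik_diffs)
    n = 0
    while n < N:
        n = min(n + batch_size, N)
        seen = np.asarray(log_lik_diffs[:n])
        lbar = seen.mean()
        # Standard error of the mean with a finite-population correction,
        # since the n differences are drawn without replacement from N.
        se = seen.std(ddof=1) / np.sqrt(n) * np.sqrt(1.0 - (n - 1) / (N - 1))
        if se == 0.0:
            break  # all N points used (or no variation left); decide exactly
        # Probability that the comparison of lbar with mu0 has the wrong sign.
        delta = 1.0 - stats.t.cdf(abs(lbar - mu0) / se, df=n - 1)
        if delta < error_tol:
            break  # confident enough: only n of the N points were needed
    return lbar > mu0, n

In a full sampler a test of this kind replaces the usual comparison against the acceptance ratio, so each accept/reject decision typically touches only a small fraction of the dataset.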