Please join us this upcoming Thursday (June 6th) for the following talk given by Anoop Korattikara (a student of Max Welling). The talk will be held at 2:00pm in AA3195 as usual.
Title: Markov Chain Monte Carlo and the Bias-Variance Tradeoff
Bayesian posterior sampling can be painfully slow on very large datasets, since traditional MCMC methods such as Hybrid Monte Carlo are designed to be asymptotically unbiased and require processing the entire dataset to generate each sample. Thus, given a small amount of sampling time, the variance of estimates computed using such methods can be prohibitive. We argue that lower-risk estimates can often be obtained using "approximate" MCMC methods that mix very fast (and thus reduce the variance quickly) at the expense of a small bias in the stationary distribution. I will first talk about two such biased algorithms: Stochastic Gradient Langevin Dynamics and its successor, Stochastic Gradient Fisher Scoring, both of which use stochastic gradients estimated from mini-batches of data, allowing them to mix very fast. Then I will present our current work on a new (biased) MCMC algorithm that uses a sequential hypothesis test to approximate the Metropolis-Hastings test, allowing us to accept or reject samples with high confidence using only a fraction of the data required for the exact test.
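For context, here is a minimal sketch of the Stochastic Gradient Langevin Dynamics update mentioned in the abstract, in the spirit of Welling and Teh's original formulation: a mini-batch stochastic gradient step on the log posterior plus injected Gaussian noise of matching variance. The model interface (grad_log_prior, grad_log_lik) and the NumPy-based code are illustrative assumptions, not the speaker's implementation.

    # Illustrative sketch only: one SGLD update under an assumed model
    # interface (grad_log_prior, grad_log_lik); not the speaker's code.
    import numpy as np

    def sgld_step(theta, data, grad_log_prior, grad_log_lik, eps, batch_size, rng):
        """theta <- theta + (eps/2) * stochastic grad of log posterior + N(0, eps) noise."""
        N = len(data)
        idx = rng.choice(N, size=batch_size, replace=False)
        # Mini-batch estimate of the full-data gradient of the log posterior.
        grad = grad_log_prior(theta) + (N / batch_size) * sum(
            grad_log_lik(theta, data[i]) for i in idx)
        noise = rng.normal(scale=np.sqrt(eps), size=theta.shape)
        return theta + 0.5 * eps * grad + noise

In practice the step size eps is annealed toward zero over the run, trading the bias of the noisy updates against how quickly the chain mixes, which is the bias-variance tradeoff the talk refers to.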
Reminder:
Begin forwarded message:
From: Guillaume Desjardins <guillaume.desjardins@gmail.com>
Subject: [Lisa_labo] Talk by Anoop Korattikara: Thursday, June 6th at 2:00pm (AA3195)
Date: June 3, 2013, 09:24:01 EDT
To: lisa_seminaires@iro.umontreal.ca, lisa_teatalk@iro.umontreal.ca
Cc: kb.anoop@gmail.com, lisa LABO <lisa_labo@iro.umontreal.ca>