The UdeM-McGill-MITACS machine learning seminar series is continuing its fall schedule. *Friday's* seminar:
*Adaptive stochastic search: tuning Gaussians' covariances*
by Rémi Bardenet and Djalel Benbouzid, Université Paris-Sud XI
*Location*: Pavillon André-Aisenstadt (UdeM), room AA-3195
*Time*: *Friday, December 3, 10:30 (!)*
*Abstract*: Stochastic search algorithms are ubiquitous in optimization and statistics: they are at the core of some of the most efficient black-box optimization techniques (e.g., Evolution Strategies) and of great practical use in Bayesian inference (e.g., in Metropolis-Hastings algorithms). In this talk I will concentrate on stochastic search techniques with Gaussian proposals that learn from their exploration to tune their covariance. After a quick overview of the field and its methods, I will present 1) an evolutionary optimization algorithm with mixture proposals and its application to a Bayesian optimization problem, and 2) a novel work in progress on defining nonlinear adaptive Gaussian proposals. The first item deals with Gaussian-process-based surrogate optimization: when the function to optimize is costly to evaluate (e.g., hyperparameter optimization in machine learning), one often relies on a surrogate model that is learnt on the fly, but optimizing this auxiliary surrogate can be a difficult task in itself. The second combines the good properties of adaptive MCMC with reproducing kernel Hilbert spaces to exploit nonlinearity in the data.
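For concreteness, the covariance-tuning idea the abstract refers to can be illustrated by an adaptive Metropolis sampler in the spirit of Haario et al. (2001), which re-estimates the covariance of its Gaussian random-walk proposal from the chain's own history. The sketch below is a generic Python illustration of that technique, not the speakers' algorithm; the function name and all settings are illustrative.

    import numpy as np

    def adaptive_metropolis(log_target, x0, n_iter=10000, adapt_start=500,
                            eps=1e-6, seed=0):
        # Gaussian random-walk Metropolis whose proposal covariance is
        # re-estimated from the chain's own past samples (adaptive MCMC).
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        d = x.size
        scale = 2.38 ** 2 / d      # classic optimal-scaling factor for RWM
        cov = np.eye(d)            # initial proposal covariance
        logp = log_target(x)
        chain = np.empty((n_iter, d))
        for t in range(n_iter):
            # Propose from a Gaussian centred at the current state;
            # eps * I keeps the proposal covariance non-singular.
            prop = rng.multivariate_normal(x, scale * cov + eps * np.eye(d))
            logp_prop = log_target(prop)
            # Standard Metropolis accept/reject step.
            if np.log(rng.uniform()) < logp_prop - logp:
                x, logp = prop, logp_prop
            chain[t] = x
            # After a short burn-in, learn the covariance from the history.
            if t >= adapt_start:
                cov = np.cov(chain[: t + 1].T)
        return chain

    # Example: sample a strongly correlated 2-D Gaussian target.
    prec = np.linalg.inv(np.array([[1.0, 0.9], [0.9, 1.0]]))
    log_target = lambda x: -0.5 * x @ prec @ x
    samples = adaptive_metropolis(log_target, np.zeros(2))

Recomputing the empirical covariance from the full history each step is the simplest choice; practical implementations update it recursively, and the adaptation must diminish or otherwise be controlled for the chain to remain ergodic.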
Reminder!
---------- Forwarded message ----------
From: Dumitru Erhan <erhandum@iro.umontreal.ca>
Date: Mon, Nov 29, 2010 at 15:17
Subject: UdeM-McGill-MITACS machine learning seminar Fri Dec. 3@10h30, AA-3195
To: lisa_seminaires@iro.umontreal.ca