A reminder for today's talk by Ludovic Arnold. See you there!
---------- Forwarded message ----------
From: Guillaume Desjardins <guillaume.desjardins(a)gmail.com>
Date: Tue, Sep 13, 2011 at 10:26 AM
Subject: UdeM-McGill-MITACS machine learning seminar Thurs. Sep. 15th @
13h30, AA1175
To: lisa_seminaires(a)iro.umontreal.ca
A UdeM-McGill-MITACS machine learning seminar will be held this Thursday,
September 15th. The talk, given by Ludovic Arnold, will take place from
13h30-14h30 in room AA1175 (pavillon Andre-Aisenstadt; click
here <http://maps.google.ca/maps?q=2920,+chemin+de+la+tour&hl=en&z=16&iwloc=A>
for directions) at the Université de Montréal. Hope to see you there!
Title: Information-Geometric Optimization Algorithms: A Unifying Picture via
Invariance Principles
Abstract:
In the context of gradient descent for likelihood maximization, the choice
of metric is critical. By using a so-called "natural gradient descent"
strategy, in which a meaningful metric is chosen, one benefits from
desirable properties such as invariance with respect to parametrization and
faster convergence. After an introduction to natural gradient descent in the
canonical case of likelihood maximization, I will present a unifying
framework for Information-Geometric Optimization which gives a new
understanding of evolution strategies such as CMA-ES, NES, CEM and PBIL.
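
For those new to the topic: the natural gradient preconditions the ordinary
gradient with the inverse Fisher information matrix, i.e. the update
direction is F(theta)^-1 * grad log L(theta). As a warm-up (a minimal sketch
of my own, not material from the speaker), here is the simplest possible
case, a Bernoulli likelihood, where the natural-gradient step reduces to a
move straight toward the maximum-likelihood estimate:

import numpy as np

rng = np.random.default_rng(0)
data = rng.random(1000) < 0.7          # i.i.d. Bernoulli(0.7) samples
n, k = data.size, data.sum()           # sample size, number of successes

p = 0.1                                # initial parameter estimate
for _ in range(100):
    grad = k / p - (n - k) / (1 - p)   # d/dp of the log-likelihood
    fisher = n / (p * (1 - p))         # Fisher information of n samples
    p += 0.5 * grad / fisher           # natural-gradient step: F(p)^-1 * grad
print(p)                               # approaches the MLE k/n

The same update written in any reparametrization of p (log-odds, for
instance) takes first-order identical steps in distribution space, which is
the invariance property the abstract refers to.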