[Lisa_seminaires] Talk: David Duvenaud on Gradient-based hyperparameter optimization through reversible learning

Jörg Bornschein bornj at iro.umontreal.ca
Thu Feb 4 12:57:47 EST 2016


Hi,

Next *Thursday*, Feb. 11th at 3:30 pm, we will have David Duvenaud give a
talk at the department colloquium.



Title: Gradient-based hyperparameter optimization through reversible
learning
Who: David Duvenaud
When: Thursday, Feb. 11th; 3:30 pm
Where: AA 3195

Tuning hyperparameters of learning algorithms is hard because gradients are
usually unavailable. We compute exact gradients of cross-validation
performance with respect to all hyperparameters by chaining derivatives
backwards through the entire training procedure. This lets us optimize
thousands of hyperparameters, including step-size and momentum schedules,
weight initialization distributions, richly parameterized regularization
schemes, and neural net architectures. We compute hyperparameter gradients
by exactly reversing the dynamics of stochastic gradient descent with
momentum.  We'll also discuss related applications to nonlinear filtering
in model-based reinforcement learning.
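
To make the abstract concrete, here is a minimal sketch (in JAX, not the
speaker's actual code) of the basic idea: treat an entire training run of
SGD with momentum as a differentiable function of its hyperparameters, and
take the gradient of the validation loss with respect to them. All names
and the toy regression data are illustrative assumptions. This naive
version lets autodiff store the unrolled training trajectory in memory;
the method in the talk avoids that cost by exactly reversing the momentum
dynamics instead.

import jax
import jax.numpy as jnp

def loss(w, x, y):
    # Least-squares loss of a linear model.
    return jnp.mean((x @ w - y) ** 2)

def validation_loss(hypers, train_data, val_data, num_steps=100):
    # hypers = (log step size, momentum logit); reparameterized so any
    # real-valued setting is legal during hyperparameter descent.
    log_lr, logit_decay = hypers
    lr, decay = jnp.exp(log_lr), jax.nn.sigmoid(logit_decay)
    train_x, train_y = train_data
    val_x, val_y = val_data
    w = jnp.zeros(train_x.shape[1])   # init could itself be a hyperparameter
    v = jnp.zeros_like(w)             # momentum (velocity) buffer

    def step(carry, _):
        w, v = carry
        g = jax.grad(loss)(w, train_x, train_y)
        v = decay * v - (1.0 - decay) * g   # SGD-with-momentum update
        w = w + lr * v
        return (w, v), None

    (w, _), _ = jax.lax.scan(step, (w, v), None, length=num_steps)
    return loss(w, val_x, val_y)

# Toy regression problem.
kw, kx, kn, kv = jax.random.split(jax.random.PRNGKey(0), 4)
w_true = jax.random.normal(kw, (5,))
train_x = jax.random.normal(kx, (64, 5))
train_y = train_x @ w_true + 0.1 * jax.random.normal(kn, (64,))
val_x = jax.random.normal(kv, (32, 5))
val_y = val_x @ w_true

# Hypergradient: derivatives of validation loss chained backwards through
# all num_steps training iterations.
hypers = jnp.array([jnp.log(0.1), 2.0])
print(jax.grad(validation_loss)(hypers, (train_x, train_y), (val_x, val_y)))

Descending this hypergradient is then ordinary gradient-based optimization
of the hyperparameters themselves, which is what lets the approach scale
to thousands of them.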



Looking forward to seeing you there,


   j


More information about the Lisa_seminaires mailing list