Hi,
next *Thursday*, Feb. 11th at 3:30 pm, David Duvenaud will give a talk at the department colloquium.
Title: Gradient-based hyperparameter optimization through reversible learning
Who: David Duvenaud
When: Thursday, Feb. 11th, 3:30 pm
Where: AA 3195
Tuning hyperparameters of learning algorithms is hard because gradients are usually unavailable. We compute exact gradients of cross-validation performance with respect to all hyperparameters by chaining derivatives backwards through the entire training procedure. This lets us optimize thousands of hyperparameters, including step-size and momentum schedules, weight initialization distributions, richly parameterized regularization schemes, and neural net architectures. We compute hyperparameter gradients by exactly reversing the dynamics of stochastic gradient descent with momentum. We'll also discuss related applications to nonlinear filtering in model-based reinforcement learning.
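For a flavour of what "chaining derivatives backwards through the entire training procedure" means, here is a minimal sketch (my own illustration, not the speaker's code): it unrolls a few steps of SGD with momentum in JAX and differentiates the validation loss with respect to the learning rate and momentum. The talk's method goes further by exactly reversing the training dynamics so the trajectory need not be stored.

```python
# Hedged sketch: hypergradients by differentiating through an unrolled
# training loop (assumed toy least-squares problem, not the exact-reversal
# algorithm described in the talk).
import jax
import jax.numpy as jnp

def loss(w, X, y):
    # Simple least-squares training/validation objective.
    return jnp.mean((X @ w - y) ** 2)

def validation_loss(hypers, w0, X_tr, y_tr, X_val, y_val, n_steps=50):
    lr, momentum = hypers[0], hypers[1]
    w, v = w0, jnp.zeros_like(w0)
    for _ in range(n_steps):
        g = jax.grad(loss)(w, X_tr, y_tr)
        v = momentum * v - lr * g   # SGD-with-momentum update
        w = w + v
    return loss(w, X_val, y_val)

# Exact gradient of validation performance w.r.t. both hyperparameters,
# obtained by backpropagating through every training step.
hypergrad = jax.grad(validation_loss)

key = jax.random.PRNGKey(0)
X = jax.random.normal(key, (100, 5))
y = X @ jnp.arange(5.0)
X_tr, y_tr, X_val, y_val = X[:80], y[:80], X[80:], y[80:]
w0 = jnp.zeros(5)

print(hypergrad(jnp.array([0.05, 0.9]), w0, X_tr, y_tr, X_val, y_val))
```

Naively unrolling like this stores the whole weight trajectory in memory; the reversible-learning trick recovers earlier weights by running the momentum dynamics backwards instead.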
Looking forward to seeing you there,
j