Greetings,
This week Razvan Pascanu will tell us about some of his recent work on optimization methods. Hope to see you there.
When: 15h00 tomorrow (Thursday, March 8th, 2012) Where: LIDA Lab (AA3256)
Title: Yet Another Optimization Technique for Machine Learning?
Abstract: I've been trying to understand what Hessian-Free and related methods are actually doing, how and why they can claim better generalization error, and why they work better than SGD for RNNs in certain cases.
I don't have all the answers, but I have found what I consider a few misnomers, as well as interesting connections between natural gradient and recently proposed methods. I want to offer a different interpretation of how fancier optimization should be done for complex non-linear models, and to conclude by outlining how one might go about improving the proposed algorithms.
Cheers, Aaron