Hi everyone,
Shortly after NIPS, on Monday, we'll have a talk by Luigi Malagò, Professor at Shinshu University. He will talk about Natural Gradient-based Algorithms for the Training of Neural Networks.
hope you are all enjoying the conference and see you soon :)
j
--
Who: Luigi Malagò
Title: Natural Gradient-based Algorithms for the Training of Neural Networks
When: Monday, 14th December, from 1 to 2 pm
Where: AA3195
Abstract:
Stochastic gradient descent is a well-known method for training the weights of a neural network. Since the minimization of the empirical loss corresponds to an optimization problem defined over a statistical model, the direction of steepest descent is given by the natural gradient, i.e., the Riemannian gradient over a statistical manifold evaluated with respect to the Fisher information metric. However, in the general case, the natural gradient requires the evaluation of the inverse Fisher information matrix, which can be computationally infeasible for large networks. Different approaches to overcome this issue have been proposed in the literature. In the first part of the talk we introduce the natural gradient in the context of manifold optimization; next, we review different natural gradient-based training algorithms that have been proposed in the neural network literature. Finally, in the last part of the presentation, we describe different approaches to the efficient computation of the natural gradient which are used in stochastic optimization. Natural gradient methods for the optimization of the stochastic relaxation of a function, in particular in the high-dimensional setting, could inspire the design of novel strategies for the efficient training of large neural networks.
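To make the update the abstract refers to concrete, here is a minimal NumPy sketch (my own illustration, not material from the talk) of one exact natural gradient step; loss_grad and fisher are hypothetical callables standing in for the gradient of the empirical loss and the Fisher information matrix of the model:

import numpy as np

def natural_gradient_step(theta, loss_grad, fisher, lr=0.1, damping=1e-4):
    # One plain natural gradient update: theta <- theta - lr * F(theta)^{-1} * grad(theta)
    # theta     : current parameter vector, shape (d,)
    # loss_grad : function mapping theta to the empirical loss gradient, shape (d,)
    # fisher    : function mapping theta to the Fisher information matrix, shape (d, d)
    # damping   : small ridge term, an assumption added here because F is often ill-conditioned in practice
    g = loss_grad(theta)
    F = fisher(theta) + damping * np.eye(theta.size)
    # Solve F x = g rather than forming the explicit inverse of F
    nat_grad = np.linalg.solve(F, g)
    return theta - lr * nat_grad

For a network with d parameters this exact step stores a d-by-d matrix and costs roughly O(d^3) per update, which is why the efficient approximations the talk reviews matter for large networks.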
Hi,
just a quick reminder:
Shortly after NIPS, on Monday, we'll have a talk by Luigi Malagò, Professor at Shinshu University. He will talk about Natural Gradient-based Algorithms for the Training of Neural Networks.
hope you are all enjoying the conference and see you soon :)
j
--
Who: Luigi Malagò
Title: Natural Gradient-based Algorithms for the Training of Neural Networks
When: Monday, 14th December, from 1 to 2 pm
Where: AA3195
Sorry for the last-minute announcement/change:
The talk is in room 1411, which is downstairs.
see you,
j