Stochastic gradient descent is a well-known method for training the weights
of a neural network. Since the minimization of the empirical loss
corresponds to an optimization problem defined over a statistical model,
the direction of steepest descent is given by the natural
gradient, i.e., the Riemannian gradient on the statistical manifold evaluated with
respect to the Fisher information metric. In the general case, however, the natural
gradient requires the evaluation of the inverse of the Fisher information
matrix, which can be computationally infeasible for large networks.
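As a minimal illustration (the notation here is introduced for this sketch, not taken from the abstract): for a statistical model $p_\theta$ with loss $L(\theta)$, the natural gradient replaces the Euclidean gradient with

\[
\tilde{\nabla} L(\theta) = F(\theta)^{-1} \nabla_\theta L(\theta),
\qquad
F(\theta) = \mathbb{E}_{x \sim p_\theta}\!\left[ \nabla_\theta \log p_\theta(x)\, \nabla_\theta \log p_\theta(x)^{\top} \right],
\]

so that an update reads $\theta_{t+1} = \theta_t - \eta\, F(\theta_t)^{-1} \nabla_\theta L(\theta_t)$; forming and inverting (or solving linear systems with) $F$ is the step that becomes expensive for large networks.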
Different approaches to overcome this issue have been proposed in
the literature. In the first part of the talk we introduce the natural gradient in the context of
manifold optimization; next, we review training algorithms based on the natural gradient
that have been proposed in the neural network literature. Finally, we
describe different approaches to the efficient computation of the natural
gradient which are used in stochastic optimization. Natural gradient
methods for the optimization of the stochastic relaxation of a function, in particular in the high-dimensional setting, could inspire the design of novel strategies for the efficient training of large neural networks.
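As a concrete, simplified sketch (not part of the abstract, and only one of the possible approximations alluded to above): a common way to avoid inverting the full Fisher matrix is to keep only its diagonal, estimated empirically from per-sample gradients. The function name, damping constant, and toy data below are illustrative assumptions.

    import numpy as np

    def diagonal_fisher_natural_gradient_step(theta, per_sample_grads, lr=0.1, damping=1e-4):
        """One natural-gradient step with a diagonal (empirical Fisher) approximation.

        per_sample_grads: array of shape (n_samples, n_params) holding the gradient
        of the log-likelihood for each sample; only the diagonal of the averaged
        outer products g g^T is kept, so no matrix inversion is required.
        """
        grad = per_sample_grads.mean(axis=0)                # average gradient
        fisher_diag = (per_sample_grads ** 2).mean(axis=0)  # diagonal of the empirical Fisher
        natural_grad = grad / (fisher_diag + damping)       # elementwise "inverse metric"
        return theta - lr * natural_grad

    # Toy usage with random gradients (illustrative only).
    rng = np.random.default_rng(0)
    theta = rng.normal(size=5)
    per_sample_grads = rng.normal(size=(32, 5))
    theta = diagonal_fisher_natural_gradient_step(theta, per_sample_grads)

The diagonal approximation reduces the per-step cost from cubic (a full inverse) to linear in the number of parameters, at the price of ignoring correlations between parameters; block-wise and low-rank approximations trade accuracy against cost along the same lines.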