[Lisa_seminaires] Tea talk Wednesday 9th April @13:00 AA3195 by David WF and Razvan

Razvan Pascanu r.pascanu at gmail.com
Tue Apr 8 03:22:52 EDT 2014


Kind reminder.

Best,
Razvan


On Fri, Apr 4, 2014 at 10:15 AM, Razvan Pascanu <r.pascanu at gmail.com> wrote:

> Hi all,
>
> This Wednesday, same place (AA3195), same time (13:00), we have a double tea
> talk. First, David will present his (and his co-authors') paper accepted at
> ICLR 2014. After that, I will do a practice run of my own ICLR oral
> presentation.
>
> Be warned that this tea talk might take longer than 1h altogether.
>
>
> Talk by: David Warde-Farley
>
> Title: An empirical analysis of dropout in piecewise linear networks
>
> Abstract:
>
> The recently introduced dropout training criterion for neural networks has
> been the subject of much attention due to its simplicity and remarkable
> effectiveness as a regularizer, as well as its interpretation as a training
> procedure for an exponentially large ensemble of networks that share
> parameters. In this work we empirically investigate several questions
> related to the efficacy of dropout, specifically as it concerns networks
> employing the popular rectified linear activation function. We investigate
> the quality of the test-time weight-scaling inference procedure by
> evaluating the geometric average exactly in small models, as well as
> comparing the performance of the geometric mean to the arithmetic mean more
> commonly employed by ensemble techniques. We explore the effect of tied
> weights on the ensemble interpretation by training ensembles of masked
> networks without tied weights. Finally, we investigate an alternative
> criterion based on a biased estimator of the maximum likelihood ensemble
> gradient.
>
> Paper on OpenReview: <http://openreview.net/document/f4c625c6-b0eb-4fd3-ab50-25182fe68733#f4c625c6-b0eb-4fd3-ab50-25182fe68733>
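>
> As a rough illustration of the comparison described in the abstract, here is
> a minimal sketch (not code from the paper; the network size, weights and keep
> probability are made up) that enumerates every dropout mask of a tiny
> one-hidden-layer ReLU network and compares the exact geometric and arithmetic
> ensemble means with weight-scaling inference:
>
>     import itertools
>     import numpy as np
>
>     rng = np.random.default_rng(0)
>     n_in, n_hid, n_out, p = 4, 6, 3, 0.5   # p = probability of keeping a hidden unit
>     W1, b1 = rng.normal(size=(n_in, n_hid)), np.zeros(n_hid)
>     W2, b2 = rng.normal(size=(n_hid, n_out)), np.zeros(n_out)
>     x = rng.normal(size=n_in)
>
>     def softmax(z):
>         e = np.exp(z - z.max())
>         return e / e.sum()
>
>     def forward(x, mask):
>         h = np.maximum(0.0, x @ W1 + b1) * mask   # dropout mask on the hidden layer
>         return softmax(h @ W2 + b2)
>
>     # Exhaustively enumerate the 2^n_hid masked sub-networks (feasible only in small models).
>     masks = np.array(list(itertools.product([0.0, 1.0], repeat=n_hid)))
>     kept = masks.sum(axis=1)
>     weights = p ** kept * (1 - p) ** (n_hid - kept)   # probability of each mask
>     probs = np.array([forward(x, m) for m in masks])
>
>     arith = weights @ probs                            # arithmetic mean of the ensemble
>     geo = np.exp(weights @ np.log(probs + 1e-12))      # geometric mean, renormalized below
>     geo /= geo.sum()
>     scaled = softmax((np.maximum(0.0, x @ W1 + b1) * p) @ W2 + b2)   # weight-scaling inference
>     print(arith, geo, scaled)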
>
>
> Talk by: Razvan Pascanu (practice run of a 15+5 minute oral presentation)
>
> Title: Revisiting natural gradient for deep networks
>
> Abstract:
>
> The aim of this paper is three-fold. First we show that Hessian-Free
> (Martens, 2010) and Krylov Subspace Descent (Vinyals and Povey, 2012) can
> be described as implementations of natural gradient descent due to their
> use of the extended Gauss-Newton approximation of the Hessian. Secondly we
> re-derive natural gradient from basic principles, contrasting the
> difference between two versions of the algorithm found in the neural
> network literature, as well as highlighting a few differences between
> natural gradient and typical second order methods. Lastly we show
> empirically that natural gradient can be robust to overfitting and, in
> particular, robust to the order in which the training data is presented to
> the model.
>
> Paper on OpenReview: <http://openreview.net/document/1cd7651c-8029-457e-ae24-5fbca0f3a6a7#1cd7651c-8029-457e-ae24-5fbca0f3a6a7>
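>
> As a rough intuition for the Fisher / extended Gauss-Newton connection
> mentioned above, here is a minimal sketch (illustrative only, not the paper's
> implementation; the data and hyper-parameters are made up) of one damped
> natural gradient step for logistic regression, where the Fisher matrix has
> the closed form X^T diag(p(1-p)) X. Hessian-Free and Krylov Subspace Descent
> approximate the same linear solve with truncated conjugate gradients or a
> small Krylov subspace rather than a direct inverse:
>
>     import numpy as np
>
>     rng = np.random.default_rng(0)
>     n, d = 200, 5
>     X = rng.normal(size=(n, d))                 # toy inputs
>     y = (rng.random(n) < 0.5).astype(float)     # toy binary labels
>     w = np.zeros(d)
>     damping, lr = 1e-3, 1.0
>
>     def sigmoid(z):
>         return 1.0 / (1.0 + np.exp(-z))
>
>     for _ in range(10):
>         p = sigmoid(X @ w)
>         grad = X.T @ (p - y) / n                            # gradient of the mean NLL
>         fisher = (X * (p * (1 - p))[:, None]).T @ X / n     # exact Fisher for this model
>         step = np.linalg.solve(fisher + damping * np.eye(d), grad)   # damped F^{-1} g
>         w -= lr * step                                      # natural gradient update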
>
>
> I hope to see many of you there,
> Best,
>
> Razvan
>


More information about the Lisa_seminaires mailing list