[Lisa_seminaires] [Tea Talk] Jeffrey Pennington (Google Brain NYC) Fri 29 March 2019 10h30 Mila Auditorium

Rim Assouel rim.assouel at gmail.com
Fri 29 Mar 09:38:49 EDT 2019


Reminder that this is in 45 minutes :) !



> On Mar 25, 2019, at 11:36 AM, Rim Assouel <rim.assouel at gmail.com> wrote:
> 
> This week we have Jeffrey Pennington from Google Brain NYC giving a talk on "Are Overparameterized Neural Networks Actually Just Linear Models?" at 10h30 in the Mila Auditorium. 
> 
> Will this talk be streamed <https://mila.bluejeans.com/809027115/webrtc>? Yes.
> 
> See you there! 
> The Tea Talk Team 
> 
> TITLE Are Overparameterized Neural Networks Actually Just Linear Models?
> 
> ABSTRACT 
> Neural networks define a rich and expressive class of functions whose properties and behaviors are very hard to describe from a theoretical perspective. Nevertheless, when these functions become highly overparameterized, a surprisingly simple characterization emerges. In this talk, I will discuss several perspectives on this characterization: 1) I will examine the prior over functions induced by common weight initialization schemes and show that it corresponds to a Gaussian process with a well-defined compositional kernel; 2) I will show that by tuning initialization hyperparameters, this kernel can be optimized for signal propagation, yielding networks that are trainable to enormous depths (10k+ layers); and 3) I will demonstrate that the learning dynamics of such overparameterized neural networks are governed by a linear model obtained from the first-order Taylor expansion of the network around its initial parameters. 
> 
> BIO 
> Jeffrey Pennington is a Research Scientist at Google Brain, NYC. Prior to this, he was a postdoctoral fellow at Stanford University, as a member of the Stanford Artificial Intelligence Laboratory in the Natural Language Processing (NLP) group, where he studied the unsupervised learning of word representations. He received his Ph.D. in theoretical particle physics from Stanford University while working at the SLAC National Accelerator Laboratory, with a main focus on the development of calculational techniques in perturbative quantum field theory. Jeffrey's current research interests center on the theory of deep learning and include topics such as trainability and expressivity, the dynamics of learning, the role of overparameterization, stochastic networks and random matrix theory, and the geometry of high-dimensional loss surfaces.
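
To make point 3 of the abstract concrete, here is a minimal JAX sketch (an illustration added to this archive entry, not material from the talk) of the linearized model obtained from a first-order Taylor expansion of a network around its initial parameters, f_lin(x; theta) = f(x; theta_0) + J(x; theta_0)(theta - theta_0). The helper names init_mlp, mlp_apply, and linearize are hypothetical; only the jax library is assumed.

import jax
import jax.numpy as jnp

def init_mlp(key, sizes):
    # Standard Gaussian initialization, scaled by 1/sqrt(fan_in).
    params = []
    for din, dout in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        w = jax.random.normal(sub, (din, dout)) / jnp.sqrt(din)
        params.append((w, jnp.zeros(dout)))
    return params

def mlp_apply(params, x):
    # Forward pass: tanh hidden layers, linear readout.
    for w, b in params[:-1]:
        x = jnp.tanh(x @ w + b)
    w, b = params[-1]
    return x @ w + b

def linearize(apply_fn, params0):
    # First-order Taylor expansion of apply_fn in the parameters around params0:
    # f_lin(params, x) = f(params0, x) + J(params0, x) . (params - params0),
    # computed with a Jacobian-vector product so the Jacobian is never built.
    def f_lin(params, x):
        delta = jax.tree_util.tree_map(lambda a, b: a - b, params, params0)
        f0, jvp = jax.jvp(lambda p: apply_fn(p, x), (params0,), (delta,))
        return f0 + jvp
    return f_lin

key = jax.random.PRNGKey(0)
params0 = init_mlp(key, [4, 512, 512, 1])      # wide hidden layers
x = jax.random.normal(jax.random.PRNGKey(1), (8, 4))

f_lin = linearize(mlp_apply, params0)
# The two models agree exactly at initialization; the abstract's claim is that
# for highly overparameterized (very wide) networks, gradient-descent training
# keeps them close, so the dynamics are governed by this linear model.
print(jnp.max(jnp.abs(mlp_apply(params0, x) - f_lin(params0, x))))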



More information about the Lisa_seminaires mailing list