[Lisa_seminaires] [Tea Talk] Brady Neal (MILA) Fri Apr 6 10:30AM AA1360

Michael Noukhovitch mnoukhov at gmail.com
Wed Apr 4 12:35:19 EDT 2018


This week we have our very own *Brady Neal* giving a talk this *Friday* at
*10:30AM* in room *AA1360*.

I may be a little bit biased, but coming to this talk will be at least one
standard deviation better than your usual Friday!
Michael

*TITLE* Towards Understanding Generalization in Deep Learning by Revisiting
the Bias-Variance Decomposition

*KEYWORDS* DL Theory, ML Theory


*ABSTRACT* Generalization is at the very core of machine learning. The
bias-variance decomposition is an underused lens through which to view
generalization. While it is more common to derive upper bounds on the
generalization gap via more complicated complexity measures such as the
VC dimension and Rademacher complexity, the bias-variance decomposition
is an *equality*, and a noticeably simpler one. Looking through this
lens, we can quickly arrive at partial explanations for why larger neural
networks seem to generalize better than their smaller counterparts, even
though bounds based on VC dimension and Rademacher complexity suggest the
opposite. We appeal to some of the blessings of high dimensionality to do
this.
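
For anyone who would like a quick refresher before the talk: for squared
error, with a test point x, target y = f(x) + \epsilon where \epsilon is
zero-mean noise with variance \sigma^2, and a predictor \hat{h} trained on
a randomly drawn training set, the decomposition reads

  \mathbb{E}[(y - \hat{h}(x))^2]
    = (f(x) - \mathbb{E}[\hat{h}(x)])^2                    % squared bias
    + \mathbb{E}[(\hat{h}(x) - \mathbb{E}[\hat{h}(x)])^2]  % variance
    + \sigma^2                                             % irreducible noise

with all expectations taken over the draw of the training set (and any
randomness in training). Note that it holds with equality, unlike the
VC-dimension and Rademacher bounds mentioned above.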

While Zhang et al. (2017) were quite surprised by the results of their
experiments in “Understanding deep learning requires rethinking
generalization,” those results would have been much less surprising viewed
through the lens of the bias-variance decomposition.
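
As a rough illustration of the kind of experiment this lens suggests (a
minimal sketch of my own, not code from the talk), one can estimate the
squared bias and variance of small vs. large networks by retraining the
same architecture on many independently drawn training sets:

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def f(x):
    # True function; the targets below add Gaussian noise on top of it.
    return np.sin(3 * x)

x_test = np.linspace(-1, 1, 200).reshape(-1, 1)  # fixed test points

def bias_variance(width, n_trials=30, n_train=50, sigma=0.3):
    # Retrain the same architecture on n_trials independent training sets.
    preds = []
    for t in range(n_trials):
        x = rng.uniform(-1, 1, size=(n_train, 1))
        y = f(x).ravel() + sigma * rng.standard_normal(n_train)
        net = MLPRegressor(hidden_layer_sizes=(width,), max_iter=5000,
                           random_state=t)
        net.fit(x, y)
        preds.append(net.predict(x_test))
    preds = np.stack(preds)            # shape: (n_trials, n_test_points)
    mean_pred = preds.mean(axis=0)     # estimate of E[h_hat(x)]
    bias2 = np.mean((f(x_test).ravel() - mean_pred) ** 2)
    variance = np.mean(preds.var(axis=0))
    return bias2, variance

for width in (5, 500):
    b2, v = bias_variance(width)
    print(f"width={width:4d}  bias^2={b2:.4f}  variance={v:.4f}")

The squared bias and variance are estimated pointwise across the retrained
models, then averaged over the test points; how these two terms behave as
the network gets wider is exactly the question the talk takes up.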


*BIO* Brady was an intern at MILA and is now a Masters student at MILA,
working with Ioannis Mitliagkas. He organizes the DL Theory Reading Group
and is currently focused on generalization and optimization in deep
learning.