[Lisa_seminaires] [Lisa_montreal] [Tea Talk] Brady Neal (MILA) Fri Apr 6 10:30AM AA1360

Brady Neal bradyneal11 at gmail.com
Fri Apr 6 14:18:49 EDT 2018


Thank you to all who came! I greatly appreciate the comments, questions,
and feedback. If there is a nagging detail that has been growing in your
mind, please do message me about it. Likewise, if you brought up a point or
criticism during the talk that you feel was not adequately addressed, or
was answered with "I don't know," I'd appreciate it if you messaged me
about it.

I've attached the slides.

Best,
Brady

On Fri, Apr 6, 2018 at 10:24 AM, Michael Noukhovitch <mnoukhov at gmail.com>
wrote:

> And, for the first time ever, this talk will be **LIVESTREAMED**
>
> Watch the livestream here (but please mute your microphone)
> https://bluejeans.com/809027115/browser
>
>
> On Fri, Apr 6, 2018 at 10:07 AM Michael Noukhovitch <mnoukhov at gmail.com>
> wrote:
>
>> Reminder this is in 20 minutes!
>>
>> On Wed, Apr 4, 2018, 12:35 Michael Noukhovitch <mnoukhov at gmail.com>
>> wrote:
>>
>>> This week we have our very own *Brady Neal* giving a talk this *Friday*
>>> at *10:30AM* in room *AA1360*.
>>>
>>> I may be a little bit biased, but coming to this talk will be at least
>>> one standard deviation better than your usual Friday!
>>> Michael
>>>
>>> *TITLE *Towards Understanding Generalization in Deep Learning by
>>> Revisiting the Bias-Variance Decomposition
>>>
>>> *KEYWORDS *DL Theory, ML Theory
>>>
>>>
>>> *ABSTRACT* Generalization is at the very core of machine learning. The
>>> bias-variance decomposition is an underused lens through which to view
>>> generalization. While it is more common to derive upper bounds on the
>>> generalization gap via more complicated measures of complexity, such as
>>> the VC dimension and Rademacher complexity, the bias-variance
>>> decomposition is an *equality* that is noticeably simpler. Looking
>>> through this lens, we can quickly get to partial explanations for why
>>> larger neural networks seem to generalize better than their smaller
>>> counterparts, despite the fact that bounds based on VC dimension and
>>> Rademacher complexity suggest the opposite. We appeal to some of the
>>> blessings of high dimensionality to do this.
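>>>
>>> For readers who want the identity in front of them, the textbook
>>> bias-variance decomposition for squared error reads as follows (writing
>>> $y = f(x) + \varepsilon$ with noise variance $\sigma^2$, and $\hat{f}_D$
>>> for the predictor trained on a random training set $D$):
>>>
>>> \[
>>> \mathbb{E}_{D,\varepsilon}\big[(y - \hat{f}_D(x))^2\big]
>>>   = \underbrace{\big(f(x) - \mathbb{E}_D[\hat{f}_D(x)]\big)^2}_{\text{bias}^2}
>>>   + \underbrace{\mathbb{E}_D\big[\big(\hat{f}_D(x) - \mathbb{E}_D[\hat{f}_D(x)]\big)^2\big]}_{\text{variance}}
>>>   + \underbrace{\sigma^2}_{\text{noise}}
>>> \]
>>>
>>> Unlike VC or Rademacher bounds on the generalization gap, this holds with
>>> equality, so any change in test error when growing the network has to
>>> show up in the bias term or the variance term.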
>>>
>>> While Zhang et al. (2017) were quite surprised by the results of their
>>> experiments in “Understanding deep learning requires rethinking
>>> generalization,” they would have been much less surprised had they been
>>> looking at their results through the lens of the bias-variance
>>> decomposition.
>>>
>>>
>>> *BIO* Brady was an intern at MILA and is now a Master's student there,
>>> working with Ioannis Mitliagkas. He organizes the DL Theory Reading Group
>>> and is currently focused on generalization and optimization in deep
>>> learning.
>>>
>>
> _______________________________________________
> Lisa_montreal mailing list
> Lisa_montreal at iro.umontreal.ca
> https://webmail.iro.umontreal.ca/mailman/listinfo/lisa_montreal
>
>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Bias-Variance Decomposition.pdf
Type: application/pdf
Size: 3553720 bytes
Desc: not available
Url: http://webmail.iro.umontreal.ca/pipermail/lisa_seminaires/attachments/20180406/52c87b68/attachment-0001.pdf


More information about the Lisa_seminaires mailing list