This week we have our very own *Brady Neal* giving a talk this *Friday* at *10:30 AM* in room *AA1360*.
I may be a little bit biased, but coming to this talk will be at least one standard deviation better than your usual Friday!
Michael
*TITLE* Towards Understanding Generalization in Deep Learning by Revisiting the Bias-Variance Decomposition
*KEYWORDS* DL Theory, ML Theory
*ABSTRACT* Generalization is at the very core of machine learning. The bias-variance decomposition is an underused lens through which to view generalization. While it's more common to derive upper bounds on the generalization gap via more complicated complexity measures such as the VC dimension and Rademacher complexity, the bias-variance decomposition is an *equality*, and a noticeably simpler one. Looking through this lens, we can quickly arrive at partial explanations for why larger neural networks seem to generalize better than their smaller counterparts, despite the fact that bounds based on VC dimension and Rademacher complexity suggest the opposite. We appeal to some of the blessings of high dimensionality to do this.
While Zhang et al. (2017) were quite surprised by the results of their experiments in “Understanding deep learning requires rethinking generalization,” they would have been much less surprised had they viewed their results through the lens of the bias-variance decomposition.
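(For anyone who'd like a refresher before Friday: below is the standard squared-error form of the decomposition, included here only as a reference sketch; the exact variant used in the talk may differ. Assuming data y = f(x) + ε with E[ε] = 0 and Var(ε) = σ², and writing ĥ_D for the model trained on a random training set D:)

\[
% Expected squared error at a fixed input x, averaged over training sets D and noise ε.
\underbrace{\mathbb{E}_{D,\varepsilon}\!\big[(y - \hat{h}_D(x))^2\big]}_{\text{expected error at } x}
= \underbrace{\big(f(x) - \mathbb{E}_D[\hat{h}_D(x)]\big)^2}_{\text{bias}^2}
+ \underbrace{\mathbb{E}_D\big[(\hat{h}_D(x) - \mathbb{E}_D[\hat{h}_D(x)])^2\big]}_{\text{variance}}
+ \underbrace{\sigma^2}_{\text{irreducible noise}}
\]

Every term is non-negative and the relation is an exact equality, not a bound, which is what makes it such a direct lens on the generalization gap.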
*BIO* Brady was an intern at MILA and is now a Master's student there, working with Ioannis Mitliagkas. He organizes the DL Theory Reading Group and is currently focused on generalization and optimization in deep learning.
Reminder: this is in 20 minutes!
And announcing for the first time ever: this talk will be **LIVESTREAMED**!
Watch the livestream here (but please mute your microphone): https://bluejeans.com/809027115/browser
Thank you to all who came! I much appreciate the comments, questions, and feedback. If a nagging detail has been growing in your mind, do message me about it. Likewise, if you raised a point or criticism during the talk that you feel wasn't adequately addressed, or was answered with "I don't know," I'd greatly appreciate hearing from you.
I've attached the slides.
Best,
Brady