This week we have Jeffrey Pennington from Google Brain NYC giving a talk on Are Overparameterized Neural Networks Actually Just Linear Models? at 10h30 in the Mila Auditorium.
Will this talk be streamed <https://mila.bluejeans.com/809027115/webrtc>? Yes
See you there!
The Tea Talk Team
TITLE Are Overparameterized Neural Networks Actually Just Linear Models?
ABSTRACT
Neural networks define a rich and expressive class of functions whose properties and behaviors are very hard to describe from a theoretical perspective. Nevertheless, when these functions become highly overparameterized, a surprisingly simple characterization emerges. In this talk, I will discuss several perspectives on this characterization: 1) I will examine the prior over functions induced by common weight initialization schemes and show that it corresponds to a Gaussian process with a well-defined compositional kernel; 2) I will show that by tuning initialization hyperparameters, this kernel can be optimized for signal propagation, yielding networks that are trainable to enormous depths (10k+ layers); and 3) I will demonstrate that the learning dynamics of such overparameterized neural networks are governed by a linear model obtained from the first-order Taylor expansion of the network around its initial parameters.
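To unpack point 3: the linear model in question is the first-order Taylor expansion of the network function around its initialization. Writing f(x; \theta) for the network output (notation of my own choosing, not necessarily the speaker's), the linearization is

    f_{\mathrm{lin}}(x; \theta) = f(x; \theta_0) + \nabla_\theta f(x; \theta_0)^\top (\theta - \theta_0),

which is linear in the parameters \theta even though it remains nonlinear in the input x; the claim is that, in the highly overparameterized regime, gradient-descent training of the full network f stays close to training of f_lin.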
BIO
Jeffrey Pennington is a Research Scientist at Google Brain, NYC. Prior to this, he was a postdoctoral fellow at Stanford University, as a member of the Stanford Artificial Intelligence Laboratory in the Natural Language Processing (NLP) group, where he studied the unsupervised learning of word representations. He received his Ph.D. in theoretical particle physics from Stanford University while working at the SLAC National Accelerator Laboratory, with a main focus on the development of calculational techniques in perturbative quantum field theory. Jeffrey’s current research interests center on the theory of deep learning, and include topics such as trainability and expressivity, the dynamics of learning, the role of overparameterization, stochastic networks and random matrix theory, and the geometry of high-dimensional loss surfaces.
Hi everyone,
Next week we will have an “extraordinary” invited talk (organized by Christopher Beckham) on Monday :)
The regular tea talk will also be back on Friday (announcement to come!)
See you there,
The Tea Talk Team
TITLE
Automated Machine Learning using OpenML.org <http://openml.org/>
ABSTRACT
Algorithm Selection and Hyperparameter Optimization are two laborious but vital components of the Machine Learning pipeline. The field of Automated Machine Learning (AutoML) focuses, as the name suggests, on automating these processes. Automated Machine Learning builds upon knowledge from previous experiments. In this talk I will introduce OpenML.org <http://openml.org/>, an online platform hosting almost 10 million Machine Learning experiments and attracting 20,000 unique visitors per month. Furthermore, I will talk about some of the projects that utilize this knowledge, in particular for determining which hyperparameters are most important and how to optimize them based on prior knowledge.
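As a purely illustrative sketch (not code from the talk), the snippet below uses the openml Python package together with scikit-learn to download a dataset from OpenML.org and run a small random search over random-forest hyperparameters; the dataset id (61, the classic 'iris' set), the search space, and all parameter ranges are arbitrary choices of mine, and the exact openml API may differ across package versions.

    import openml
    from scipy.stats import randint
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import RandomizedSearchCV

    # Download a dataset from OpenML; id 61 ('iris') is an assumption
    # for illustration, any tabular classification dataset would do.
    dataset = openml.datasets.get_dataset(61)
    X, y, _, _ = dataset.get_data(target=dataset.default_target_attribute)

    # A small random search over two random-forest hyperparameters,
    # the kind of experiment whose results OpenML aggregates at scale.
    search = RandomizedSearchCV(
        RandomForestClassifier(random_state=0),
        param_distributions={
            "max_features": randint(1, X.shape[1] + 1),
            "min_samples_leaf": randint(1, 20),
        },
        n_iter=20,
        cv=3,
        random_state=0,
    )
    search.fit(X, y)
    print(search.best_params_, search.best_score_)

Studies like those mentioned above mine many such runs across many datasets to learn which hyperparameters matter most and to warm-start new searches from prior results.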
BIO
Jan van Rijn is founder of the OpenML Foundation for Open Machine Learning research. He has authored several publications related to Data Science, Automated Machine Learning (AutoML) and Artificial Intelligence. He is currently a post-doctoral researcher at Columbia University in New York. For a complete and up-to-date list of his publications, please see his Google Scholar profile: https://scholar.google.com/citations?user=O4X5CpwAAAAJ
This week we have David Blei from Columbia University giving a talk on The Blessings of Multiple Causes at 10h30 in the Mila Auditorium.
Will this talk be streamed <https://mila.bluejeans.com/809027115/webrtc>? Yes
David will also be visiting Mila on Thursday (March 7th). To that end, we have organized a discussion group on Causal Inference from 2pm to 3pm in room A14.
Several of you asked for 1:1 and 2:1 meetings: the schedule is also set for that day, and I’ll send the invites by tomorrow :)
See you there!
The Tea Talk Team
TITLE The Blessings of Multiple Causes
ABSTRACT
Causal inference from observational data is a vital problem, but it comes with strong assumptions. Most methods require that we observe all confounders, variables that correlate with both the causal variables (the treatment) and the effect of those variables (how well the treatment works). But whether we have observed all confounders is a famously untestable assumption. We describe the deconfounder, a way to do causal inference from observational data with weaker assumptions than the classical methods require. How does the deconfounder work? While traditional causal methods measure the effect of a single cause on an outcome, many modern scientific studies involve multiple causes, different variables whose effects are simultaneously of interest. The deconfounder uses the multiple causes as a signal for unobserved confounders, combining unsupervised machine learning and predictive model checking to perform causal inference. We describe the theoretical requirements for the deconfounder to provide unbiased causal estimates, and show that it requires weaker assumptions than classical causal inference. We analyze the deconfounder's performance in three types of studies: semi-simulated data around smoking and lung cancer, semi-simulated data around genome-wide association studies, and a real dataset about actors and movie revenue. The deconfounder provides a checkable approach to estimating close-to-truth causal effects. This is joint work with Yixin Wang. [*] https://arxiv.org/abs/1805.06826
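To make the recipe concrete, here is a minimal sketch of the deconfounder as described above (and in the cited paper), using a linear factor model and a linear outcome model purely for illustration; the method allows any well-fitting latent-variable model, and the toy data below is my own, not from the talk.

    import numpy as np
    from sklearn.decomposition import FactorAnalysis
    from sklearn.linear_model import LinearRegression

    # Toy data: A is an n x m matrix of assigned causes, y the outcome.
    rng = np.random.default_rng(0)
    n, m = 500, 10
    A = rng.normal(size=(n, m))
    y = A @ rng.normal(size=m) + rng.normal(size=n)

    # Step 1: fit a factor model to the causes alone and infer a
    # substitute confounder z_hat for each unit.
    fa = FactorAnalysis(n_components=2).fit(A)
    z_hat = fa.transform(A)

    # Step 2 (omitted here): run predictive checks on held-out cause
    # entries; only proceed if the factor model fits the causes well.

    # Step 3: estimate effects by regressing the outcome on the causes
    # while adjusting for the substitute confounder.
    outcome_model = LinearRegression().fit(np.hstack([A, z_hat]), y)
    cause_effects = outcome_model.coef_[:m]  # coefficients on the causes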
BIO
David Blei is a Professor of Statistics and Computer Science at Columbia University, and a member of the Columbia Data Science Institute. He studies probabilistic machine learning, including its theory, algorithms, and application. David has received several awards for his research, including a Sloan Fellowship (2010), Office of Naval Research Young Investigator Award (2011), Presidential Early Career Award for Scientists and Engineers (2011), Blavatnik Faculty Award (2013), ACM-Infosys Foundation Award (2013), and a Guggenheim fellowship (2017). He is the co-editor-in-chief of the Journal of Machine Learning Research. He is a fellow of the ACM and the IMS.