[Lisa_seminaires] [DIRO Talk] Pieter-Jan Kindermans (Google/Ghent) Thur Feb 22 11:00AM AA1355

Michael Noukhovitch mnoukhov at gmail.com
Tue Feb 20 13:13:55 EST 2018


Continuing our special DIRO talk week, we have *Pieter-Jan Kindermans*
from *Ghent
University and Google Brain* giving a talk on *Thursday Feb 22* at *11:00AM*
in room *AA1355*. As always, email me if you'd be interested in speaking
with him!

This talk should be good because I've heard the speaker is great at
explaining ML!
Michael

*TITLE* Towards a reliable understanding and visualization of deep neural
networks

*KEYWORDS* Explainable ML, Testing Deep Learning, Deep Learning Theory


*ABSTRACT* Deep learning has transformed the field of machine learning.
Empirically, these methods work brilliantly, but it is difficult to
understand what exactly they have learned. If this were possible, we would
understand our networks far better.

The community has spent considerable effort on visualizing what a neural
network learns. However, while many methods have been invented to better
understand deep neural networks, we can show that these methods do not
produce the theoretically correct explanation even for a linear model,
which is the simplest neural network. Despite this, they are routinely
applied to multi-layer networks with millions of parameters.
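
As a concrete illustration of this failure (a toy sketch under assumed
data, not code from the talk): for a linear model y = w^T x, gradient-based
saliency returns the weight vector w, and w must cancel distractor noise in
the input, so it need not point along the signal the model actually
extracts.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Toy setup: inputs are signal plus distractor, x = a*y + d,
    # with signal direction a = [1, 0] and distractor along [1, -1].
    a = np.array([1.0, 0.0])
    y = rng.normal(size=n)
    d = np.outer(rng.normal(size=n), [1.0, -1.0])
    x = np.outer(y, a) + d

    # The optimal linear model must cancel the distractor:
    # w @ a = 1 and w @ [1, -1] = 0  imply  w = [1, 1].
    w = np.array([1.0, 1.0])
    assert np.allclose(x @ w, y)

    # The gradient "explanation" of this model is simply w, which
    # points 45 degrees away from the true signal direction a.
    print("gradient explanation: ", w)  # [1. 1.]
    print("true signal direction:", a)  # [1. 0.]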

We present the idea of creating unit tests for explanations. The idea
behind the unit test is that while it might be impossible to define what a
good explanation is, it is much easier to detect failure cases. Hence, by
creating more and more reliable unit tests, the community can refine its
methods iteratively and converge to a good solution.
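
A minimal sketch of what such a unit test could look like (the
construction, names, and tolerance below are illustrative assumptions, not
the speaker's code): generate synthetic data whose ground-truth signal
direction is known, then fail any explanation method that does not recover
it.

    import numpy as np

    def explanation_unit_test(explain, n=100_000, seed=0, tol=0.05):
        """Return True iff `explain(x, y, w)` recovers the known signal
        direction of a linear model on synthetic data."""
        rng = np.random.default_rng(seed)
        a = np.array([1.0, 0.0])                       # ground-truth signal
        y = rng.normal(size=n)
        d = np.outer(rng.normal(size=n), [1.0, -1.0])  # distractor, w @ d = 0
        x = np.outer(y, a) + d
        w = np.array([1.0, 1.0])                       # optimal linear weights

        e = np.asarray(explain(x, y, w), dtype=float)
        e = e / np.linalg.norm(e)                      # compare directions only
        return bool(np.linalg.norm(e - a) < tol)

    # Gradient saliency on a linear model just returns w, so it fails:
    print(explanation_unit_test(lambda x, y, w: w))    # False

Any proposed explanation method can be plugged in as `explain`; the
gradient baseline fails for the reason sketched earlier.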

Based on an analysis of linear models, we propose a generalization that
yields two explanation techniques (PatternNet and PatternAttribution) that
are theoretically sound for linear models, pass our unit test, and produce
improved explanations for deep networks.
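
In the linear case, the analysis suggests estimating the signal "pattern"
as a_hat = cov(x, y) / var(y): the input direction that actually covaries
with the model's output. A minimal sketch under the same toy assumptions as
above (the formula is my reading of the associated paper, not code from the
talk):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Same synthetic setup: x = a*y + d with known signal direction a.
    a = np.array([1.0, 0.0])
    y = rng.normal(size=n)
    x = np.outer(y, a) + np.outer(rng.normal(size=n), [1.0, -1.0])
    w = np.array([1.0, 1.0])  # optimal weights: x @ w == y

    # Linear-case pattern estimator: a_hat = cov(x, y) / var(y).
    # Unlike the gradient (which is just w), it recovers the signal.
    x_c = x - x.mean(axis=0)
    y_c = y - y.mean()
    a_hat = (x_c * y_c[:, None]).mean(axis=0) / y_c.var()
    print("estimated pattern: ", np.round(a_hat, 2))  # ~[1. 0.]
    print("gradient (weights):", w)                   # [1. 1.]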


*BIO* Pieter-Jan Kindermans obtained his PhD degree from Ghent University in
2014. From 2014 to 2017 he was a postdoc, as a Marie-Curie fellow, in the
lab of Klaus-Robert Müller in Berlin. Currently he is a Brain resident at
Google. Initially he worked on unsupervised learning for Brain-Computer
Interfaces (BCI) and demonstrated that unsupervised learning can replace
traditional supervised classifiers. A further extension of this work, in
collaboration with the universities of Freiburg and Ghent, showed that the
unsupervised decoder is guaranteed to converge to the optimal supervised
solution. This project was nominated for the BCI Award in 2017. In addition
to his work in BCI, he has also explored other subfields of machine
learning, including deep learning, large-scale training, and applications
such as molecular chemistry. His current focus is on better
understanding the operation of a deep neural network. His ultimate goal is
to make deep learning a knowledge extraction tool for science.


More information about the Lisa_seminaires mailing list