[Lisa_seminaires] [Tea Talk] Phil Bachman (Maluuba MS), Fri Oct 6, 10:30AM, AA6214

Michael Noukhovitch mnoukhov at gmail.com
Tue Oct 3 13:33:09 EDT 2017


This week we have a researcher from Maluuba (Microsoft), *Phil Bachman*,
giving a talk on *Friday Oct 6* at the new, earlier time of *10:30AM* in
room *AA6214*.

See you there!
Michael

*KEYWORDS*
Active learning, Meta-learning, Bayesian

*TITLE*
Metalearning and active learning -- like chocolate and peanut butter.

*ABSTRACT*
This talk expounds on the natural synergy between metalearning and active
learning.

The general scheme of metalearning is to replace hard-coded procedures
that output models with trainable models that themselves output models.
For example, one can use many related classification problems to train a
model that outputs a classifier given a set of labeled examples. In
effect, the trained "set2func" model takes over the role of, e.g., SGD in
producing a classifier for each problem.
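
To make the set2func idea concrete, here is a minimal NumPy sketch in
which make_classifier plays the role of the trained set2func model: it
maps a labeled support set directly to a classifier, with no per-task run
of SGD. The class-mean rule and the identity embed() are illustrative
stand-ins (assumptions), not the model from the talk:

    import numpy as np

    def embed(x):
        # Stand-in for an embedding network that would be trained across
        # many related tasks (an assumption for illustration).
        return x

    def make_classifier(support_x, support_y):
        # set2func: map a labeled support set to a classifier. Here the
        # classifier assigns each query to the class whose mean embedding
        # is nearest, replacing the per-task run of SGD.
        classes = np.unique(support_y)
        protos = np.stack([embed(support_x[support_y == c]).mean(axis=0)
                           for c in classes])

        def classify(query_x):
            d = np.linalg.norm(embed(query_x)[:, None, :] - protos[None],
                               axis=-1)
            return classes[np.argmin(d, axis=1)]

        return classify

    # Usage: a 2-way, 3-shot toy task.
    sx = np.array([[0., 0.], [0., 1.], [1., 0.],
                   [5., 5.], [5., 6.], [6., 5.]])
    sy = np.array([0, 0, 0, 1, 1, 1])
    clf = make_classifier(sx, sy)
    print(clf(np.array([[0.5, 0.5], [5.5, 5.5]])))  # -> [0 1]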

From a Bayesian perspective, the set2func model can be interpreted as
encoding both a prior over classifiers and a procedure for performing
inference w.r.t. the learned prior given some labeled data. Existing
approaches to meta classification can be interpreted as learning a prior
over classifiers, which is co-adapted with some mechanism for performing
MAP inference. For example, the "base" parameters learned by MAML give the
mean of a Gaussian prior over parameterizations of the "base" model.
Adapting the base parameters to a new problem instance via SGD can be
interpreted as approximate posterior inference (see, e.g., "Early Stopping
as Non-parametric Variational Inference").
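
To spell out the MAP reading with a toy computation: under a Gaussian
prior N(theta_base, I/lambda), per-task MAP adaptation is just gradient
descent on the task loss plus an L2 pull toward theta_base. The quadratic
task loss and all names below are illustrative assumptions, not MAML
itself:

    import numpy as np

    def map_adapt(theta_base, task_grad, lam=1.0, lr=0.1, steps=50):
        # Maximize log p(data | theta) + log p(theta) for a Gaussian
        # prior N(theta_base, I/lam): gradient descent on the task loss
        # plus an L2 penalty pulling theta toward the meta-learned base.
        theta = theta_base.copy()
        for _ in range(steps):
            theta -= lr * (task_grad(theta) + lam * (theta - theta_base))
        return theta

    # Toy task: quadratic loss centred at theta_star (a stand-in for the
    # per-task likelihood). The MAP solution lands between the prior mean
    # and theta_star, weighted by lam.
    theta_base = np.zeros(2)
    theta_star = np.array([2.0, -1.0])
    task_grad = lambda th: th - theta_star
    print(map_adapt(theta_base, task_grad, lam=1.0))  # ~ [1.0, -0.5]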

Current metalearning methods perform MAP inference, which can be
particularly limiting in the case of tiny sets of labeled data. Consider
the Omniglot meta classification task. If we have unlabeled data for all
classes in the current problem, but labeled data for only a subset of the
classes, a MAP-based model commits to a single classifier and so won't
explicitly represent the "cluster-based" label assignments on which this
task is based. A "Bayesian" set2func model could express this property of
the underlying task distribution through the classifiers assigned high
likelihood by its cross-task prior and per-task posteriors.
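
One minimal way to cash out the "Bayesian" version is to carry several
candidate classifiers, weighted by per-task posterior probability, and
marginalize predictions over them. In the sketch below (all names are
assumptions for illustration), the label of one cluster is ambiguous, and
that ambiguity survives in the predictive distribution where a MAP model
would have collapsed it:

    import numpy as np

    def posterior_predictive(classifiers, weights, query_x, n_classes):
        # Marginalize query predictions over sampled classifiers instead
        # of committing to the single MAP classifier.
        probs = np.zeros((len(query_x), n_classes))
        for clf, w in zip(classifiers, weights):
            probs[np.arange(len(query_x)), clf(query_x)] += w
        return probs / probs.sum(axis=1, keepdims=True)

    # Toy task with two clusters: cluster 0 has labeled examples, so its
    # label is fixed; cluster 1 is unlabeled, so the posterior keeps two
    # equally likely label assignments for it.
    cluster = lambda x: (x[:, 0] > 2).astype(int)
    clf_a = lambda x: np.where(cluster(x) == 0, 0, 1)  # cluster 1 -> class 1
    clf_b = lambda x: np.where(cluster(x) == 0, 0, 2)  # cluster 1 -> class 2
    q = np.array([[0.0, 0.0], [5.0, 0.0]])
    print(posterior_predictive([clf_a, clf_b], [0.5, 0.5], q, n_classes=3))
    # -> [[1.0, 0.0, 0.0],   query in cluster 0: certain
    #     [0.0, 0.5, 0.5]]   query in cluster 1: split across classes 1, 2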

The role of active learning, in the setting of Bayesian metalearning, is to
collect labeled examples for the current problem in a way that maximally
reduces entropy in the posterior distribution over classifiers. Having an
explicit representation of the relevant posterior should permit more
effective active learning, and adapting the prior to the task distribution
should permit tighter, more accurate per-task posteriors. Conversely,
encouraging the set2func model to represent its prior and posteriors in a
way that facilitates active learning may improve the quality of the learned
prior and posteriors.
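
The abstract doesn't fix an acquisition rule, but one standard
instantiation of "maximally reduces entropy in the posterior" is the BALD
criterion: the mutual information between a candidate's label and the
classifier, which by symmetry of mutual information equals the expected
drop in posterior entropy from labeling that candidate. A sketch, assuming
the posterior is represented by S sampled classifiers' class
probabilities:

    import numpy as np

    def entropy(p, eps=1e-12):
        return -(p * np.log(p + eps)).sum(axis=-1)

    def bald_scores(sample_probs):
        # sample_probs: (S, Q, C) class probabilities from S posterior
        # samples for Q unlabeled candidates. Score = H[mean prediction]
        # - mean H[per-sample prediction], i.e. how much the posterior
        # samples disagree about a candidate's label.
        mean_p = sample_probs.mean(axis=0)  # (Q, C)
        return entropy(mean_p) - entropy(sample_probs).mean(axis=0)

    # Two candidates: the samples agree on the first and disagree on the
    # second, so the second is the more informative point to label.
    sample_probs = np.array([[[1.0, 0.0], [1.0, 0.0]],
                             [[1.0, 0.0], [0.0, 1.0]]])
    print(bald_scores(sample_probs))  # -> [~0.0, ~0.69 (= log 2)]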

Developing models which perform well in the few-shot setting
(metalearning), and which efficiently collect information to improve task
performance (active learning), will be critical to success in settings like
"life-long learning".

This talk discusses these themes and presents some related concrete
results.