Title: Computationally and statistically efficient methods for model
selection in graphical models
Speaker: Kevin Murphy (Stats/CS, U. British Columbia)
Time/Place: Friday, 11am in Macdonald-Harrington G-01
Abstract:
Graphical models are a way of representing conditional independence
relationships between random variables using graphs. In this talk, we
discuss ways of learning the structure of graphs from data. This is
useful for visualizing relationships between variables in
high-dimensional data, as well as for building density models for use in
prediction, classification, clustering, etc. We will focus on undirected
graphs (also called random fields), and in particular on methods based
on L1-penalized maximum likelihood. First, we extend existing results
for the Gaussian and binary cases to the more general setting of
conditional random fields and multi-state models. This requires that
we replace the
L1 penalty with a group L1 penalty, which poses various computational
challenges. The second extension is to estimate the group structure (by
clustering the variables) while simultaneously learning the graph
structure. This technique relies on new bounds on the partition function
for the positive definite matrix Laplace distribution, which also has
applications in hierarchical Bayesian analysis of multiple related graphs.
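As a concrete illustration of the Gaussian case, here is a minimal
sketch of L1-penalized maximum likelihood structure learning (the
"graphical lasso") using scikit-learn; the data, penalty weight, and
edge threshold are illustrative assumptions, not details from the talk:

    import numpy as np
    from sklearn.covariance import GraphicalLasso

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 5))         # 200 samples, 5 variables
    model = GraphicalLasso(alpha=0.1).fit(X)  # alpha = L1 penalty weight
    # Nonzero off-diagonal entries of the estimated precision matrix
    # correspond to edges of the undirected graph.
    edges = np.abs(model.precision_) > 1e-6
    np.fill_diagonal(edges, False)
    print(np.argwhere(edges))

    # With multi-state variables, each edge carries a block of
    # parameters, and the group L1 penalty zeroes out the whole block
    # at once; its proximal operator is a group soft-threshold:
    def group_soft_threshold(v, t):
        norm = np.linalg.norm(v)
        return np.zeros_like(v) if norm <= t else (1.0 - t / norm) * v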
Friday's seminar (see
http://www.iro.umontreal.ca/article.php3?id_article=107&lang=en):
Machine Learning and Computational Linguistics: Tools of the Trade
(or, The Data Acquisition Mental Bottleneck)
by Jeremy Barnes
Location: Pavillon André-Aisenstadt (UdeM), room 3195
Time: March 27th, 15h00
Computational Linguistics is a field where Machine Learning algorithms
are ubiquitously applied. However, the gap between a good idea or a
published paper and a commercial system is often large.
The kinds of problems that can beset a machine learning practitioner
in this field will be dissected, and the tools and techniques used to
overcome them will be explored.
Particular emphasis is placed on mapping linguistic problems onto
machine learning problems, and on the complexities associated with
obtaining and using linguistic datasets.
In addition, conventional wisdom such as the "Data Acquisition
Bottleneck" is sometimes cited both as a reason for the
ineffectiveness of a machine learning solution and as grounds for
abandoning promising approaches. This bottleneck will be analyzed and
shown to be perhaps more of a mental than a practical bottleneck.
The seminar should be relevant to computational linguists and machine
learning practitioners, as well as to anyone interested in the
large-scale use of machine learning to solve problems in an industrial
setting. The tone will be relatively informal.
Bio:
Jeremy Barnes recently finished an 8 1/2-year stint at Idilia Inc., a
Montreal-based company that has developed and commercialized Word Sense
Disambiguation, knowledge extraction, and paraphrasing technology (see
www.idilia.com). Beginning as the company's first R&D employee, he
architected the technology around machine learning and has applied it
to numerous unsolved or semi-solved problems in the domain.
Note that the seminar will be held in McConnell Engineering Building, room 103
-------
Sandra Zilles (postdoc at UofA) will be visiting again March 26-27,
and will be giving a talk on Thu March 26, 11:30am.
Title: Cooperative teaching, active learning, and sample compression
Abstract:
The problem of how a teacher and a learner can cooperate in the
process of learning concepts from examples in order to minimize the
required sample size without "coding tricks" has been widely addressed,
yet without achieving teaching and learning protocols that match what
intuitively seems an optimal choice of samples for teaching.
In this presentation, two models of cooperative teaching and learning
are introduced.
The model of "subset teaching sets" is based on the idea that both
teacher and learner can iteratively exploit the assumption that the
partner is cooperative, comparable to a two-player game.
The corresponding variant of the teaching dimension turns out to be
nonmonotonic with respect to subclasses of concept classes: that is,
there are concept classes that become easier to teach when they are
expanded. We will discuss why this nonmonotonicity might
be natural in cooperative teaching scenarios.
A second model overcomes the nonmonotonicity of the
subset teaching dimension. "Recursive teaching sets" are based on
nested concept classes. The nesting here reflects the complexity of
teaching subclasses of the given concept class.
We will see how both new models can drastically reduce the sample
size required for teaching a concept - without using coding tricks
(for a simple and intuitive notion of "coding trick").
For instance, monomials can be taught with only two examples
independent of the number of variables (in both models).
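To make this concrete, here is a small sketch of one cooperative
protocol consistent with the idea above (the protocol details are an
illustrative assumption, not necessarily the paper's construction):
the teacher shows two positive examples that differ exactly on the
irrelevant variables, and the learner keeps the literals on which both
examples agree.

    def learn_monomial(ex1, ex2):
        # Keep the literals on which the two positive examples agree;
        # a variable whose value differs between them is irrelevant.
        literals = []
        for i, (a, b) in enumerate(zip(ex1, ex2)):
            if a == b == 1:
                literals.append("x%d" % i)       # positive literal
            elif a == b == 0:
                literals.append("~x%d" % i)      # negated literal
        return literals

    # Target monomial x0 & ~x2 over five variables: the teacher flips
    # every irrelevant variable (x1, x3, x4) between the two examples.
    print(learn_monomial((1, 0, 0, 0, 0), (1, 1, 0, 1, 1)))  # ['x0', '~x2']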
It will be shown how this theory of cooperative learning can open
new ways of (a) showing inherent connections between teaching
and active learning and (b) tackling a long-standing open question
on sample compression.
(joint work with Robert Holte, Steffen Lange, and Martin Zinkevich)
Hope to see you there!
Doina
You are all cordially invited to attend the doctoral thesis defense of
Hugo Larochelle, which will take place this Friday, March 13, at 14h00
in room 3195 of pavillon André-Aisenstadt.
TITLE: Étude de techniques d'apprentissage non-supervisé pour
l'amélioration de l'entraînement supervisé de modèles connexionnistes
(A study of unsupervised learning techniques for improving the
supervised training of connectionist models)
ABSTRACT:
Popularized in the 1980s, the artificial neural network is a powerful
machine learning model. By discovering a rich representation of its
input, this type of model can solve complex tasks related to, among
other things, vision and language processing. Unfortunately, training
a neural network is a hard problem, to the point that the networks
typically trained often contain relatively few hidden neurons,
organized in a single layer.
I will present the work of my doctoral thesis, which aims to exploit
unsupervised learning in order to improve the performance of various
types of supervised neural networks. Here, unsupervised learning makes
it possible to significantly increase the size of the networks that
can be trained successfully.
First, I will illustrate the benefits of combining unsupervised and
supervised learning when training a restricted Boltzmann machine, a
particular type of neural network with a single hidden layer. Next, I
will present a similar approach, based on an autoencoder neural
network, that makes it possible to train a deep neural network, that
is, one with several hidden layers. The advantage of using a deep
neural network will be demonstrated on a series of image
classification problems generated from several factors of variation.
I will then show that an additional performance improvement can be
obtained by explicitly encouraging a neural network to extract a
representation of the input that is robust, that is, invariant to the
absence of part of the input elements. Finally, I will describe a
simple method for introducing inhibitory and excitatory interactions
between hidden neurons, which extracts a richer representation of the
input and can also improve the performance of a deep network.
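As a rough illustration of the robust-representation idea described at
the end of the abstract, here is a minimal NumPy sketch of a denoising
autoencoder with tied weights; the sizes, corruption rate, and
learning rate are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_hid, lr, p_drop = 20, 10, 0.1, 0.3
    W = rng.standard_normal((n_in, n_hid)) * 0.01
    b, c = np.zeros(n_hid), np.zeros(n_in)

    def sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))

    for _ in range(1000):
        x = (rng.random(n_in) < 0.5).astype(float)  # toy binary input
        x_tilde = x * (rng.random(n_in) > p_drop)   # drop some inputs
        h = sigmoid(x_tilde @ W + b)                # robust representation
        x_hat = sigmoid(h @ W.T + c)                # reconstruct the
        d_out = x_hat - x                           # *uncorrupted* input
        d_hid = (d_out @ W) * h * (1.0 - h)
        # Cross-entropy gradient; W gets encoder and decoder terms.
        W -= lr * (np.outer(x_tilde, d_hid) + np.outer(d_out, h))
        b -= lr * d_hid
        c -= lr * d_out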
JURY:
President-Rapporteur: Pascal Vincent
Research Advisor: Yoshua Bengio
External Examiner: Geoffrey Hinton
Jury Member: Douglas Eck
Dean's Representative: Nathalie Loye
Tomorrow's seminar (see
http://www.iro.umontreal.ca/article.php3?id_article=107&lang=en):
Recent developments in learning deep networks
by Geoffrey Hinton
University of Toronto and
Canadian Institute for Advanced Research
Location: Pavillon André-Aisenstadt (UdeM), room 3195
Time: March 13th 2009, 10h30
It is possible to learn deep belief nets that are good at object
recognition by composing a number of simple modules, each of which
contains only one layer of hidden units. The layers are learned one at
a time by treating the hidden activities of one module as the data for
training the next module.
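A minimal sketch of this greedy, layer-wise scheme, using
scikit-learn's BernoulliRBM (contrastive divergence) as the
one-hidden-layer module; the data and hyperparameters are illustrative
assumptions, not details from the talk:

    import numpy as np
    from sklearn.neural_network import BernoulliRBM

    rng = np.random.default_rng(0)
    X = (rng.random((500, 64)) < 0.5).astype(float)  # toy binary data

    data, modules = X, []
    for n_hidden in (32, 16):        # two simple modules, one layer each
        rbm = BernoulliRBM(n_components=n_hidden, learning_rate=0.05,
                           n_iter=10, random_state=0)
        rbm.fit(data)                # train this module on its "data"
        data = rbm.transform(data)   # hidden activities of this module
        modules.append(rbm)          # ... become data for the next one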
I will start by describing a new method for learning each module that
is faster than previous methods and gives better performance on test
data. Then I will describe a more powerful basic module for deep
learning. The module allows third-order, multiplicative interactions
in which hidden units gate the pairwise interactions between visible
units. A technique for factoring the third-order interactions leads to
a learning module that has a simple learning rule based on pairwise
correlations. This module looks remarkably like modules that have been
proposed by both biologists trying to explain the responses of neurons
and engineers trying to create systems that can recognize objects.
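A rough sketch of the factored third-order module being described:
hidden units gate pairwise interactions between visible units, and the
cubic weight tensor W[i,j,k] is factored through a set of factors so
that learning only needs pairwise correlations between layer
activities and factor outputs. All names and sizes below are
illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(0)
    n_vis, n_hid, n_fac = 16, 8, 4
    B = rng.standard_normal((n_vis, n_fac)) * 0.1  # visible-to-factor
    C = rng.standard_normal((n_vis, n_fac)) * 0.1  # visible-to-factor
    P = rng.standard_normal((n_hid, n_fac)) * 0.1  # hidden-to-factor

    x = (rng.random(n_vis) < 0.5).astype(float)    # a visible vector
    # W[i,j,k] ~ sum_f B[i,f] C[j,f] P[k,f]: each factor multiplies two
    # projections of the visible vector, and each hidden unit sums the
    # factor outputs it is connected to.
    factor_out = (x @ B) * (x @ C)                 # shape (n_fac,)
    p_hidden = 1.0 / (1.0 + np.exp(-(factor_out @ P.T)))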
The talk will describe joint work with Tijmen Tieleman and Roland Memisevic.