This week's seminar (see http://www.iro.umontreal.ca/article.php3?id_article=107&lang=en):
How Many Clusters? An Information-Theoretic Perspective
by Susanna Still,
Department of Information and Computer Sciences
University of Hawaii
Location: Pavillon André-Aisenstadt (UdeM), room 1409
Time: September 28th 2007, 11h30
Clustering provides a common means of identifying structure
in complex data, and there is renewed interest in clustering
as a tool for the analysis of large data sets in many fields.
A natural question is how many clusters are appropriate for
the description of a given system. Traditional approaches to
this problem are based either on a framework in which clusters
of a particular shape are assumed as a model of the system or
on a two-step procedure in which a clustering criterion determines
the optimal assignments for a given number of clusters and a
separate criterion measures the goodness of the classification
to determine the number of clusters. In a statistical mechanics
approach, clustering can be seen as a trade-off between energy-
and entropy-like terms, with lower temperature driving the
proliferation of clusters to provide a more detailed description
of the data. For finite data sets, we expect that there is a
limit to the meaningful structure that can be resolved and
therefore a minimum temperature beyond which we will capture
sampling noise. This suggests that correcting the clustering
criterion for the bias that arises due to sampling errors will
allow us to find a clustering solution at a temperature that is
optimal in the sense that we capture maximal meaningful
structure, without having to define an external criterion for
the goodness or stability of the clustering. We show that in a
general information-theoretic framework, the finite size of a
data set determines an optimal temperature, and we introduce a
method for finding the maximal number of clusters that can be
resolved from the data in the hard clustering limit.
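
To make the energy/entropy picture concrete, here is a minimal sketch, in Python, of temperature-controlled soft clustering in the style of deterministic annealing. The squared-Euclidean distortion, the synthetic data, and the two temperatures are illustrative assumptions, not details taken from the talk.

import numpy as np

def soft_cluster(X, K, T, n_iter=200, seed=0):
    # Soft clustering at temperature T: assignments follow a Gibbs
    # distribution p(k|x) ~ exp(-||x - c_k||^2 / T).  The distortion plays
    # the role of energy, the softness of the assignments that of entropy.
    rng = np.random.default_rng(seed)
    # Start all centroids at the data mean plus a tiny perturbation; below
    # a critical temperature the perturbation grows and the centroids split.
    C = X.mean(axis=0) + 0.01 * rng.standard_normal((K, X.shape[1]))
    for _ in range(n_iter):
        d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)  # (n, K) distortions
        logp = -d2 / T
        logp -= logp.max(axis=1, keepdims=True)              # numerical stability
        P = np.exp(logp)
        P /= P.sum(axis=1, keepdims=True)                    # p(k|x)
        C = (P.T @ X) / P.sum(axis=0)[:, None]               # re-estimate centroids
    return C

# Two well-separated Gaussian blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-3.0, 1.0, (100, 2)), rng.normal(3.0, 1.0, (100, 2))])
for T in (100.0, 1.0):
    print(f"T={T}: centroids =\n{np.round(soft_cluster(X, K=2, T=T), 2)}")

Above the critical temperature the two centroids coincide at the data mean (one effective cluster); below it they separate, which is the phase-transition sense in which cooling drives the proliferation of clusters.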
The UdeM-McGill-MITACS Machine Learning seminars are back! Note that
this year, there will be seminars only every 2 weeks.
The seminar will be in room 437, NOT 103!
This week's seminar (see https://www.iro.umontreal.ca/article.php3?id_article=107&lang=en).
Optimal Causal Inference
by Susanna Still,
Department of Information and Computer Sciences
University of Hawaii
Location: McConnell Engineering building (McGill), room 437
Time: September 14th 2007, 11h30
I will talk about how theory building can naturally distinguish between
regularity and randomness. Starting from basic modeling principles I
will argue for a general information-theoretic objective function that
embodies a trade-off between a model’s complexity and its predictive
power. The family of solutions derived from this principle corresponds
to a hierarchy of models. At each level of complexity, those models
achieve maximal predictive power, and in the limit of optimal prediction
a process’ exact causal organization is identified. Examples show how
theory building can profit from analyzing a process’ causal
compressibility, which is reflected in the optimal models’
rate-distortion curve.
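
For orientation, one standard way to write such a trade-off is the information-bottleneck-style objective below; this is a common form of the idea, and the exact functional used in the talk may differ.

$$
\max_{p(s \mid x_{\mathrm{past}})}
\Big[\, I(S; X_{\mathrm{future}}) \;-\; \lambda\, I(S; X_{\mathrm{past}}) \,\Big],
\qquad \lambda \ge 0 .
$$

Here the internal state S summarizes the observed past; I(S; X_past) is the coding cost of that summary (model complexity) and I(S; X_future) its predictive power. Sweeping lambda traces out the hierarchy of models and the rate-distortion curve mentioned above.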