[Lisa_seminaires] Reminder: UdeM-McGill-mPrime machine learning seminar tomorrow, Wed. Oct. 12th @ 14h00, room AA3195

Guillaume Desjardins guillaume.desjardins at gmail.com
Tue 11 Oct 20:50:51 EDT 2011


A reminder for tomorrow's mPrime talk by Jean-Francois Paiement. The
talk will be held in AA3195 as usual. See you there!

---------- Forwarded message ----------
From: Guillaume Desjardins <guillaume.desjardins at gmail.com>
Date: Mon, Oct 10, 2011 at 10:39 PM
Subject: UdeM-McGill-mPrime machine learning seminar Wed. Oct. 12th @
14h00, location AA3195
To: lisa_seminaires at iro.umontreal.ca


A UdeM-McGill-mPrime machine learning seminar will be held this
Wednesday, Oct. 12th. The talk, given by Jean-Francois Paiement, will
take place from 14h00 to 15h00 at the Université de Montréal; the
precise room will be determined shortly and communicated via email.
Hope to see you there!

Title: Learning from Heterogeneous Sources via Gradient Boosting Consensus

Abstract:

Multiple data sources containing different types of features may be
available for a given task. For instance, users' profiles can be used
to build recommendation systems. In addition, a model can use
users' historical behaviors and social networks to infer users'
interests in related products. We argue that it is desirable to
use all available heterogeneous data sources collectively in order
to build effective learning models. We call this framework
"heterogeneous learning". In our proposed setting, data sources
can include (i) non-overlapping features, (ii) non-overlapping
instances, and (iii) multiple networks (i.e. graphs) connecting
instances. In this paper, we propose a general optimization framework
for heterogeneous learning, and devise a corresponding learning model
from gradient boosting. The idea is to minimize the empirical loss
with two constraints: (1) There should be consensus among the
predictions of overlapping instances (if any) from different data
sources; (2) Connected instances in graph datasets should have similar
predictions. The objective function is optimized with stochastic
gradient boosted trees. Furthermore, a weighting strategy is designed
to emphasize informative data sources and de-emphasize the noisy ones.
We formally prove that the proposed strategy leads to a tighter error
bound. This approach consistently outperforms a standard concatenation
of data sources on movie rating prediction, number recognition and
terrorist attack detection tasks. We observe that the proposed model
can reduce the out-of-sample error rate by as much as 80%.
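
For a concrete picture of the consensus objective described above,
here is a minimal sketch in Python (with NumPy and scikit-learn). It
is not the authors' implementation: it assumes squared loss, two
synthetic feature views of the same instances, plain (non-stochastic)
gradient boosting with shallow regression trees, and an illustrative
loss-based view weighting as a stand-in for the paper's weighting
strategy; all names and constants are assumptions.

# Sketch of gradient boosting with a consensus penalty across views.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n = 200
views = [rng.normal(size=(n, 5)), rng.normal(size=(n, 3)))]  # two feature views
y = views[0][:, 0] + 0.5 * views[1][:, 1] + 0.1 * rng.normal(size=n)

F = [np.zeros(n) for _ in views]   # one additive (boosted) model per view
lam, lr, rounds = 0.5, 0.1, 100    # consensus weight, shrinkage, iterations

for _ in range(rounds):
    # Weight views by training loss so noisier sources get down-weighted
    # (illustrative stand-in for the paper's weighting strategy).
    losses = np.array([np.mean((y - f) ** 2) for f in F])
    w = np.exp(-losses)
    w /= w.sum()
    consensus = sum(wi * f for wi, f in zip(w, F))
    for v, X in enumerate(views):
        # Negative gradient of 0.5*(y - F_v)^2 + 0.5*lam*(F_v - consensus)^2,
        # treating the consensus prediction as fixed for this round.
        pseudo_res = (y - F[v]) + lam * (consensus - F[v])
        tree = DecisionTreeRegressor(max_depth=3).fit(X, pseudo_res)
        F[v] += lr * tree.predict(X)   # one boosting stage for this view

print("per-view MSE:", [round(float(np.mean((y - f) ** 2)), 4) for f in F])

Each round fits one tree per view to the pseudo-residuals of that
view's own loss plus the consensus penalty, so predictions from noisy
views are pulled toward the weighted ensemble of the other sources.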

This is joint work with Xiaoxiao Shi (University of Illinois at Chicago),
David Grangier (AT&T Labs), and Philip S. Yu (University of Illinois at
Chicago).


More information about the Lisa_seminaires mailing list