[Lisa_teatalk] Tea-talk: today, Nov 29th, 3pm (LISA lab)

Guillaume Desjardins guillaume.desjardins at gmail.com
Thu Nov 29 09:02:59 EST 2012


Please join us for today's tea-talk. Caglar will be doing a practice
talk for his upcoming workshop poster at the NIPS Deep Learning
Workshop. Time permitting, I will then present the paper entitled
"Cardinality Restricted Boltzmann Machines" by Kevin Swersky et al. We
may also have special appearances by Ian Goodfellow and Razvan
Pascanu.

Abstract (Caglar Gulcehre):

We explore the effect of introducing prior information into the
intermediate level of deep learning algorithms, for a learning task
on which all the state-of-the-art machine learning algorithms we
tested failed to learn. We motivate our work by the hypothesis that
humans learn such intermediate concepts from other individuals via a
form of supervision or guidance using a curriculum. The experiments
we have conducted provide positive evidence in favor of this
hypothesis. In our experiments, a two-tiered MLP architecture is
trained on a dataset of 64x64 binary input images, each containing
three sprites. The final task is to decide whether all the sprites
are the same or whether one of them is different. The sprites are
pentomino shapes, placed at different locations in the image under
scaling and rotation transformations. The first tier of the MLP is
pretrained with intermediate-level targets indicating the presence of
each sprite at each location, while the second tier takes the output
of the first tier as input and predicts the binary target of the
final task. With a few tens of thousands of examples, the two-tiered
MLP learned the task perfectly, whereas all the other algorithms
tested (including networks with unsupervised pre-training, as well as
traditional algorithms such as SVMs, decision trees and boosting)
performed no better than chance. We hypothesize that the optimization
difficulty that arises when the intermediate pre-training is not
performed is due to the composition of two highly non-linear tasks.
Our findings are also consistent with hypotheses on cultural
learning, inspired by observations of optimization problems in deep
learning, presumably because of effective local minima.
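
For those curious about the setup, below is a minimal illustrative
sketch of the two-tiered training scheme in Python/PyTorch. This is
not the code used in the experiments: the location grid, the number
of sprite classes, and all layer sizes are assumptions made only to
keep the sketch self-contained.

import torch
import torch.nn as nn

N_LOCATIONS = 64       # hypothetical: an 8x8 grid of candidate locations
N_SPRITE_CLASSES = 9   # hypothetical: 8 sprite shapes + 1 "empty" class

class Tier1(nn.Module):
    """Maps a 64x64 binary image to per-location sprite logits."""
    def __init__(self, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 64, hidden),
            nn.Tanh(),
            nn.Linear(hidden, N_LOCATIONS * N_SPRITE_CLASSES),
        )
    def forward(self, x):
        # Logits of shape (batch, locations, sprite classes).
        return self.net(x).view(-1, N_LOCATIONS, N_SPRITE_CLASSES)

class Tier2(nn.Module):
    """Maps tier-1 outputs to the final same/different logit."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(N_LOCATIONS * N_SPRITE_CLASSES, hidden),
            nn.Tanh(),
            nn.Linear(hidden, 1),
        )
    def forward(self, z):
        return self.net(z).squeeze(-1)

tier1, tier2 = Tier1(), Tier2()

# Stage 1: pretrain tier 1 on the intermediate targets, i.e. which
# sprite (if any) is present at each location.
x = torch.randint(0, 2, (32, 64, 64)).float()          # dummy binary images
loc_targets = torch.randint(0, N_SPRITE_CLASSES, (32, N_LOCATIONS))
loss1 = nn.CrossEntropyLoss()(tier1(x).transpose(1, 2), loc_targets)
loss1.backward()

# Stage 2: train tier 2 on tier 1's (softmaxed, detached) outputs to
# predict the final binary target.
y = torch.randint(0, 2, (32,)).float()                 # dummy final targets
z = tier1(x).softmax(dim=-1).detach()
loss2 = nn.BCEWithLogitsLoss()(tier2(z), y)
loss2.backward()

The point of the sketch is the staged supervision: tier 1 is trained
against the intermediate (per-location sprite) targets, and tier 2 is
trained only on tier 1's outputs to predict the final binary event.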

