[Lisa_seminaires] Fwd: Predoc Presentation Abstract

Yoshua Bengio yoshua.bengio at gmail.com
Fri Jan 17 08:05:53 EST 2014


Hi all,

Caglar Gulcehre will defend his thesis proposal next Tuesday at 1:30pm in
room 3195 (unless I am mistaken). Please mark your calendars and come to
participate in this important event in the life of a PhD student and learn
about Caglar's research.

-- Yoshua


---------- Forwarded message ----------
From: Çağlar Gülçehre <ca9lar at gmail.com>
Date: Fri, Jan 17, 2014 at 5:06 AM
Subject: Predoc Presentation Abstract
To: Yoshua Bengio <bengioy at iro.umontreal.ca>
Cc: Alain Tapp <tappa at iro.umontreal.ca>, Roland Memisevic <
roland.memisevic at gmail.com>


Dear Professors,

Please find the title and abstract of my predoc presentation below:

Title: On Challenges of Learning Abstract Tasks with Deep Learning

Abstract:

Deep learning algorithms have achieved great success on a variety of benchmarks
in recent years. They rely on combining layers of latent factors into
hierarchies, where the higher levels of the hierarchy can represent more
abstract, higher-level features. Nevertheless, it is still weakly understood
under what conditions the learning dynamics during training give rise to those
higher-level features. Motivated by cultural learning and by the effect that
intermediate hints learned from other individuals have on learning high-level
abstractions, we investigated the performance of several machine learning
algorithms on a binary artificial dataset. In our experiments we observed that,
without any prior knowledge, all of the models, including common deep learning
algorithms, fail to solve the task. An interesting characteristic of the
problem is that it is a composition of two nonlinear sub-tasks. We were able to
solve the task by providing intermediate-level hints to the architecture and by
changing the architecture. Our findings suggest that deep learning algorithms
can have difficulty learning high-level abstract tasks due to inherent
optimization issues.
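
To make the role of the intermediate hints concrete, here is a minimal sketch,
assuming a PyTorch-style setup; the architecture, dimensions, and losses are
illustrative assumptions for this email, not the thesis code. It shows a
two-stage model where an auxiliary loss on the intermediate representation
supplies the hint, alongside the final binary loss.

import torch
import torch.nn as nn

class TwoStageNet(nn.Module):
    # First stage predicts an intermediate representation (the "hint"),
    # second stage combines it into the final binary decision.
    def __init__(self, in_dim, hint_dim, hidden_dim):
        super().__init__()
        self.stage1 = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.Tanh(),
            nn.Linear(hidden_dim, hint_dim))
        self.stage2 = nn.Sequential(
            nn.Tanh(), nn.Linear(hint_dim, hidden_dim),
            nn.Tanh(), nn.Linear(hidden_dim, 1))

    def forward(self, x):
        hint = self.stage1(x)      # intermediate prediction
        out = self.stage2(hint)    # final prediction (logit)
        return hint, out

model = TwoStageNet(in_dim=64, hint_dim=16, hidden_dim=128)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
bce = nn.BCEWithLogitsLoss()

def training_step(x, hint_target, y, hint_weight=1.0):
    # With hint_weight = 0 only the final loss drives learning (no prior
    # knowledge); with hint_weight > 0 the auxiliary loss supervises the
    # intermediate layer, i.e. the "hint".
    hint, out = model(x)
    loss = bce(out.squeeze(1), y) + hint_weight * bce(hint, hint_target)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()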

We also considered a novel activation function for deep neural networks called
L_p norm pooling. Its distinguishing characteristic is that the order p of the
pooling is learned as part of the training process. Learning the values of p in
the L_p norm allows the unit to generalize over other types of norm-pooling
methods.
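
As a rough illustration of a pooling unit with a learnable order, here is a
minimal sketch, again assuming PyTorch; the parameterization of p and the group
layout are assumptions made for the example, not the exact formulation studied
in the thesis.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnedLpPool(nn.Module):
    # L_p pooling over fixed groups of units; the order p of each group is a
    # learnable parameter, constrained to p >= 1 via a softplus.
    def __init__(self, num_groups, eps=1e-6):
        super().__init__()
        # softplus(0) is about 0.69, so each p starts near 1.7
        self.p_raw = nn.Parameter(torch.zeros(num_groups))
        self.eps = eps

    def forward(self, x):
        # x: (batch, num_groups, group_size)
        p = 1.0 + F.softplus(self.p_raw)        # shape (num_groups,), p > 1
        p_b = p.unsqueeze(0).unsqueeze(-1)      # broadcast to (1, num_groups, 1)
        pooled = (x.abs().clamp_min(self.eps) ** p_b).mean(dim=-1)
        return pooled ** (1.0 / p)              # root of matching order per group

pool = LearnedLpPool(num_groups=8)
h = torch.randn(4, 8, 5)        # 4 examples, 8 groups of 5 units each
print(pool(h).shape)            # torch.Size([4, 8])

With p close to 1 this behaves like an average of magnitudes, and as p grows it
approaches max pooling, so learning p lets a single unit move between the usual
pooling types (the mean rather than the sum is used here only to keep the scale
independent of group size).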

Finally, I propose new approaches and problems for future work, specifically
tasks that involve sequential learning and structured prediction, such as
natural language processing. The problems addressed in this thesis are mainly a
consequence of difficult optimization and might be solved via cognitively
inspired algorithms with mathematical foundations in the machine learning
literature.

Best,

-- 
Caglar GULCEHRE


More information about the Lisa_seminaires mailing list