[Lisa_seminaires] Tea talk Wednesday 26th @13:00 AA3195 by Mehdi

Razvan Pascanu r.pascanu at gmail.com
Mon 24 Mar 02:04:33 EDT 2014


Hi all,

 This Wednesday, 26th March, at 13:00 in room AA3195, Mehdi Mirza is going
to present his and Ian's submission to ICLR 2014, which was accepted in the
conference track. See the title and abstract below.

Jessica Thompson will also present after Mehdi. She will introduce herself
to the lab and talk about her work.

I hope to see many of you there,
Best,
Razvan


An Empirical Investigation of Catastrophic Forgetting in Gradient-Based
Neural Networks
Authors
Ian J Goodfellow, Mehdi Mirza, Xia Da, Aaron Courville, Yoshua Bengio

Catastrophic forgetting is a problem faced by many machine learning models
and algorithms. When trained on one task, then trained on a second task,
many machine learning models "forget" how to perform the first task. This
is widely believed to be a serious problem for neural networks. Here, we
investigate the extent to which the catastrophic forgetting problem occurs
for modern neural networks, comparing both established and recent
gradient-based training algorithms and activation functions. We also
examine the effect of the relationship between the first task and the
second task on catastrophic forgetting. We find that it is always best to
train using the dropout algorithm: dropout is consistently best at adapting
to the new task and at remembering the old task, and it has the best
tradeoff curve between these two extremes. We find that different
tasks and relationships between tasks result in very different rankings of
activation function performance. This suggests the choice of activation
function should always be cross-validated.
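
To make the experimental protocol concrete, here is a minimal numpy
sketch (illustrative only, not the paper's code; the paper used larger
networks and real datasets, while the synthetic tasks, function names,
and hyperparameters below are invented for this example). It trains a
small ReLU network on a task A, then on a task B, and reports how task-A
accuracy changes, with hidden-layer dropout toggled on and off.

import numpy as np

rng = np.random.default_rng(0)

def make_task(n=2000, d=20):
    # Synthetic binary task: labels given by a random linear boundary.
    w = rng.normal(size=d)
    X = rng.normal(size=(n, d))
    return X, (X @ w > 0).astype(float)

def init(d=20, h=100):
    return [rng.normal(scale=0.1, size=(d, h)), np.zeros(h),
            rng.normal(scale=0.1, size=h), 0.0]

def accuracy(params, X, y):
    W1, b1, W2, b2 = params
    hid = np.maximum(0.0, X @ W1 + b1)
    return np.mean(((hid @ W2 + b2) > 0) == y)

def train(params, X, y, epochs=20, lr=0.1, p_drop=0.5, batch=50):
    W1, b1, W2, b2 = params
    for _ in range(epochs):
        order = rng.permutation(len(X))
        for s in range(0, len(X), batch):
            xb, yb = X[order[s:s+batch]], y[order[s:s+batch]]
            pre = xb @ W1 + b1
            hid = np.maximum(0.0, pre)               # ReLU hidden layer
            mask = np.ones_like(hid)
            if p_drop > 0:                           # inverted dropout
                mask = (rng.random(hid.shape) > p_drop) / (1 - p_drop)
            hid = hid * mask
            prob = 1 / (1 + np.exp(-(hid @ W2 + b2)))
            g = (prob - yb) / len(xb)                # grad of mean cross-entropy wrt logits
            dh = np.outer(g, W2) * mask * (pre > 0)  # back through dropout and ReLU
            W2 -= lr * (hid.T @ g); b2 -= lr * g.sum()
            W1 -= lr * (xb.T @ dh); b1 -= lr * dh.sum(axis=0)
    return [W1, b1, W2, b2]

XA, yA = make_task()   # "old" task
XB, yB = make_task()   # "new" task
for p in (0.0, 0.5):
    params = train(init(), XA, yA, p_drop=p)
    acc_before = accuracy(params, XA, yA)
    params = train(params, XB, yB, p_drop=p)
    print("dropout=%.1f: task A accuracy %.2f -> %.2f after training on B"
          % (p, acc_before, accuracy(params, XA, yA)))

Whether this toy setup reproduces the paper's rankings depends on the
tasks and hyperparameters; it is only meant to make the sequential
training protocol concrete.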


More information about the Lisa_seminaires mailing list