[Lisa_seminaires] 21 Jan 2015: Talk by Adriana Romero on Wednesday at 13.30 (AA3195)

Kyung Hyun Cho cho.k.hyun at gmail.com
Sun 18 Jan 14:28:19 EST 2015


Dear all,

Adriana Romero, a visiting researcher at the lab from Barcelona (!), will
tell us about her latest work done here. She will teach us how to raise our
neural nets to become leaner but taller networks.

The talk will be at the usual place, AA3195, starting at 13.30 on Wednesday
(21 Jan).

Hope to see many of you there!
- Cho

===
- *Speaker*: Adriana Romero (University of Barcelona)
- *Date/Time*: 13.30 - 14.30, 21 Jan 2015
- *Place*: AA3195
- *Title*: FitNets: Hints for Thin Deep Nets
- *Abstract*:
While depth tends to improve network performance, it also makes
gradient-based training more difficult, since deeper networks tend to be
more non-linear. The recently proposed knowledge distillation approach is
aimed at obtaining small and fast-to-execute models, and it has shown that
a student network can imitate the soft output of a larger teacher network
or ensemble of networks. We extend this idea to allow the training of a
student that is deeper and thinner than the teacher, using not only the
outputs but also the intermediate representations learned by the teacher as
hints to improve the training process and final performance of the student.
Because the student's intermediate hidden layer will generally be smaller
than the teacher's intermediate hidden layer, additional parameters are
introduced to map the student hidden layer to the prediction of the teacher
hidden layer. This allows one to train deeper students that can generalize
better or run faster, a trade-off that is controlled by the chosen student
capacity. For example, on CIFAR-10, a deep student network with almost 10.4
times fewer parameters outperforms a larger, state-of-the-art teacher
network.
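
For those curious about the mechanics before the talk, here is a minimal
sketch of the two losses the abstract describes: a hint loss that matches the
student's intermediate layer to the teacher's through a small regressor, and
a soft-target distillation loss on the outputs. It is written in PyTorch
rather than the Theano code actually used at LISA, and the layer widths, the
1x1-convolution regressor, and the weighting lam are illustrative
assumptions, not the paper's exact configuration.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Hypothetical toy widths: the teacher's hint layer is wider than the
    # student's guided layer, as in the thin-and-deep student setting.
    teacher_hint_channels = 64
    student_guided_channels = 16

    # Stage 1: hint-based training. A small regressor (here a 1x1
    # convolution, one plausible choice) maps the student's guided layer
    # onto the teacher's hint layer so the two can be compared despite
    # their different widths.
    regressor = nn.Conv2d(student_guided_channels, teacher_hint_channels,
                          kernel_size=1)

    def hint_loss(student_guided, teacher_hint):
        """L2 distance between regressed student features and the hint."""
        return F.mse_loss(regressor(student_guided), teacher_hint)

    # Stage 2: knowledge distillation on the soft outputs, in the spirit of
    # Hinton et al.'s softened softmax with temperature T. The student
    # matches the teacher's softened distribution while also fitting the
    # hard labels; lam balances the two terms.
    def distillation_loss(student_logits, teacher_logits, labels,
                          T=4.0, lam=0.5):
        soft_targets = F.softmax(teacher_logits / T, dim=1)
        soft_student = F.log_softmax(student_logits / T, dim=1)
        kd = F.kl_div(soft_student, soft_targets,
                      reduction="batchmean") * (T * T)  # standard T^2 scale
        ce = F.cross_entropy(student_logits, labels)     # hard-label term
        return lam * ce + (1.0 - lam) * kd

In the two-stage scheme the abstract outlines, the hint loss is minimized
first to pre-train the student up to its guided layer, and the distillation
loss then trains the whole student network.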

- *Bio*:
I am currently a PhD student at the University of Barcelona, advised by Dr.
Carlo Gatta, working on deep learning models and their applications to
computer vision. I graduated from Universitat Autònoma de Barcelona in 2010
as a Computer Engineer and from Universitat Politècnica de Catalunya in 2012
with an M.Sc. in Artificial Intelligence. My previous work focused on
unsupervised sparse feature learning algorithms for training both shallow and
deep networks. In August 2014, I joined the LISA lab for 6 months, where I've
been working with Prof. Yoshua Bengio on training thin and deep student
networks from shallower, wider teacher networks.

