Dear all,
Dong-Hyun will update us on his progress with target propagation (target-prop) at this week's tea talk. The talk will be held at the usual place, AA3195, and at the usual time, 13:30.
Hope many will attend the talk! - Cho
===
- Speaker: Dong-Hyun Lee
- Title: Training a supervised deep network and an auto-encoder through target propagation without back-prop
- Abstract:
I'll present how to train a supervised network through target propagation. This new technique, suggested by Yoshua, makes it possible to train a deep network without back-propagation, thanks to layer-wise target-matching losses and target propagation through the layers. Our results are comparable to back-propagation on MNIST. Moreover, because this technique doesn't use any derivatives to propagate training signals, it can train discrete networks directly, which is impossible with ordinary back-propagation. Using it, we can also train an auto-encoder without back-prop.
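For those who want a concrete picture before the talk, here is a minimal NumPy sketch of the idea. All names, layer sizes, and step sizes below are illustrative assumptions, not the actual code from the talk: each layer has a forward function f_i and a learned approximate inverse g_i; a target is set at the top by a small gradient step on the output loss, propagated downward through the inverse, and each layer then minimizes its own local target-matching loss, so no derivatives ever cross layer boundaries.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy two-layer network; sizes are illustrative, not from the talk.
    n_in, n_hid, n_out = 784, 256, 10
    W1 = rng.normal(scale=0.1, size=(n_in, n_hid))   # forward: h1 = tanh(x @ W1)
    W2 = rng.normal(scale=0.1, size=(n_hid, n_out))  # forward: y  = h1 @ W2
    V2 = rng.normal(scale=0.1, size=(n_out, n_hid))  # learned inverse g2(t) = tanh(t @ V2)

    def f1(x):
        return np.tanh(x @ W1)

    def f2(h1):
        return h1 @ W2

    def g2(t):
        return np.tanh(t @ V2)

    def train_step(x, y_true, lr=0.01):
        global W1, W2, V2
        h1 = f1(x)
        y = f2(h1)

        # Top target: nudge the output toward the label with a gradient step
        # on 0.5*||y - y_true||^2. This is the only place the global loss is
        # differentiated, and the derivative stays in the top layer.
        t2 = y - lr * (y - y_true)

        # Propagate the target downward through the learned inverse, instead
        # of back-propagating derivatives through f2.
        t1 = g2(t2)
        r = g2(y)  # reconstruction of h1, used to train the inverse itself

        # Each layer minimizes its own local target-matching loss
        # 0.5*||f_i(h_{i-1}) - t_i||^2; these gradients never cross layers.
        grad_W2 = h1.T @ (y - t2)
        grad_W1 = x.T @ ((h1 - t1) * (1.0 - h1 ** 2))  # tanh' = 1 - tanh^2

        # The inverse g2 is trained with its own local reconstruction loss
        # 0.5*||g2(f2(h1)) - h1||^2.
        grad_V2 = y.T @ ((r - h1) * (1.0 - r ** 2))

        W2 -= lr * grad_W2
        W1 -= lr * grad_W1
        V2 -= lr * grad_V2

    # Shapes-only usage on random data (no real MNIST here).
    x = rng.normal(size=(32, n_in))
    y_true = np.eye(n_out)[rng.integers(0, n_out, size=32)]
    train_step(x, y_true)

If the talk follows the difference-target-prop variant, the downward step would instead be t1 = h1 + g2(t2) - g2(y), which corrects for the inverse being only approximate; the sketch above omits that correction for brevity.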
Dear all,
I'll present "Training a supervised deep network through target propagation without back-prop" at AA3195 at 1:30pm.
Here are the slides for today's tea talk. Unfortunately, I decided not to present the backprop-free auto-encoder so that the talk doesn't run too long. Feel free to ask about it at any time.
Dong-Hyun.