Thanks for the links, Yoshua and Caglar. Here are my slides.
2014/1/16 Çağlar Gülçehre <ca9lar@gmail.com>:
As a related note, I think it is also possible to learn parse trees (hence the tree data structure) with reinforcement learning, for both images and natural language.
For images: http://vision.mas.ecp.fr/Personnel/teboul/files/cvpr11_teboul.pdf
For natural language processing, I think Hal Daume has published a few papers using imitation learning.
But these examples do not use neural networks.
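To make the idea concrete, here is a minimal toy sketch of learning shift-reduce parse actions with REINFORCE. Everything in it (the logistic policy, the hand-picked features, the right-branching toy reward, the constants) is a hypothetical stand-in for illustration, not taken from the papers above:

```python
import numpy as np

rng = np.random.default_rng(0)
W = np.zeros(3)  # policy weights over features [stack depth, buffer size, bias]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def run_episode(n_tokens):
    """Shift-reduce episode: returns the finished tree and the
    log-probability gradients of the stochastic choices made."""
    stack, buf, grads = [], n_tokens, []
    while buf > 0 or len(stack) > 1:
        f = np.array([len(stack), buf, 1.0])
        p_shift = sigmoid(W @ f)
        if buf == 0:            # buffer empty: must reduce
            shift = False
        elif len(stack) < 2:    # cannot reduce yet: must shift
            shift = True
        else:                   # genuine stochastic choice: record its gradient
            shift = rng.random() < p_shift
            grads.append((1 - p_shift) * f if shift else -p_shift * f)
        if shift:
            stack.append(buf)   # push the next token
            buf -= 1
        else:
            right, left = stack.pop(), stack.pop()
            stack.append((left, right))  # reduce: combine top two into a node
    return stack[0], grads

def reward(tree):
    """Toy target: 1.0 if the tree is fully right-branching, else 0.0."""
    while isinstance(tree, tuple):
        left, tree = tree
        if isinstance(left, tuple):
            return 0.0
    return 1.0

for _ in range(2000):           # REINFORCE with a crude 0.5 baseline
    tree, grads = run_episode(5)
    r = reward(tree)
    for g in grads:
        W += 0.1 * (r - 0.5) * g
```

In a real setup the reward would come from agreement with gold trees (or a downstream task), and the policy would condition on actual token features rather than just stack and buffer sizes.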
On Thu, Jan 16, 2014 at 2:09 PM, Yoshua Bengio <yoshua.bengio@gmail.com> wrote:
Here is the paper I mentioned during Ian's presentation about training a recurrent net to exploit a push-pop stack:
http://books.nips.cc/papers/files/nips02/0380.pdf
It was not Mike Mozer but Lee Giles.
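For a concrete picture, here is a minimal forward-pass sketch of one way to couple a recurrent net to a push-pop stack: the net reads the stack top and emits soft push/pop/no-op strengths, and the stack is updated as a differentiable mixture of the three outcomes. Note this follows later continuous-stack formulations rather than the exact construction in the linked paper, and all names and dimensions here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class StackRNN:
    """Forward pass of an RNN augmented with a soft, differentiable stack."""
    def __init__(self, n_in, n_hid, stack_width, depth=16):
        s = 0.1
        self.Wx = rng.normal(0, s, (n_hid, n_in))
        self.Wh = rng.normal(0, s, (n_hid, n_hid))
        self.Ws = rng.normal(0, s, (n_hid, stack_width))  # reads the stack top
        self.Wa = rng.normal(0, s, (3, n_hid))            # push/pop/no-op logits
        self.Wp = rng.normal(0, s, (stack_width, n_hid))  # value to push
        self.depth, self.width = depth, stack_width

    def step(self, x, h, stack):
        top = stack[0]
        h = np.tanh(self.Wx @ x + self.Wh @ h + self.Ws @ top)
        a = softmax(self.Wa @ h)   # soft [push, pop, no-op] strengths
        v = np.tanh(self.Wp @ h)   # candidate value to push
        pushed = np.vstack([v, stack[:-1]])                          # hard push
        popped = np.vstack([stack[1:], np.zeros((1, self.width))])   # hard pop
        stack = a[0] * pushed + a[1] * popped + a[2] * stack         # soft mix
        return h, stack

# Usage sketch: run the net over a random input sequence.
net = StackRNN(n_in=4, n_hid=8, stack_width=5)
h = np.zeros(8)
stack = np.zeros((net.depth, net.width))
for x in rng.normal(size=(10, 4)):
    h, stack = net.step(x, h, stack)
```

Because the stack update is a convex combination of the push, pop, and no-op results, the whole step is differentiable and trainable end to end with ordinary backpropagation through time.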
-- Yoshua
On Mon, Jan 13, 2014 at 1:04 PM, Razvan Pascanu <r.pascanu@gmail.com> wrote:
Hi all,
First of all, the schedule has changed: from now on, tea talks will be on Wednesdays from 13:00 to 14:00.
There is an exception this week: Ian will be presenting this **Thursday from 13:00 to 14:00**.

Speaker: Ian Goodfellow
Title: Ian's ideas for research projects
Abstract:
I'll throw out a few of my recent ideas for research projects, ranging from easy to idealistic:
- Restricted maxout units (a sketch of the standard maxout unit follows after this list)
- Regularizing piecewise linear nets to change pieces rarely
- Sparsely connected recurrent nets
- Trajectory optimization for recurrent nets
- "Cognitive agency" and how you can use it to do things like discrete-state nets, dynamically structured nets, and nets that interact with data structures like stacks and tapes
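As background for the first item, here is a minimal sketch of a standard maxout unit (Goodfellow et al., 2013), which outputs the max over k affine pieces per unit; what the "restricted" variant would look like is Ian's unstated idea, so only the unrestricted baseline is shown:

```python
import numpy as np

rng = np.random.default_rng(0)

def maxout(x, W, b):
    """Maxout layer: max over k affine pieces per output unit.
    W has shape (k, n_out, n_in); b has shape (k, n_out)."""
    z = np.einsum('koi,i->ko', W, x) + b  # k affine maps of the input
    return z.max(axis=0)                  # piecewise-linear max per unit

# Usage: 3 linear pieces, 4 inputs, 2 output units.
W = rng.normal(size=(3, 2, 4))
b = rng.normal(size=(3, 2))
y = maxout(rng.normal(size=4), W, b)
```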
I hope to see many of you there. And please sign up to give tea talks.
Razvan
-- Caglar GULCEHRE