In reference to the problem of constant-memory training of RNNs, I mentioned the forward computation of derivatives in this talk (although I didn't know the name at the time). Here is a recent paper I came across that proposes an approximation to this method; it may be of interest to anyone working on this problem.
http://arxiv.org/abs/1507.07680
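For anyone who missed the talk, here is a minimal sketch (my own illustration, not code from the paper) of what forward computation of derivatives looks like for a vanilla tanh RNN, often called real-time recurrent learning: sensitivities of the hidden state with respect to the parameters are pushed forward alongside the state, so memory is constant in sequence length, but the sensitivity tensors grow with network size; if I read it correctly, the paper replaces them with a cheap unbiased low-rank estimate. All names and sizes below are made up for illustration.

import numpy as np

rng = np.random.default_rng(0)
n_h, n_x = 4, 3
W = rng.normal(scale=0.1, size=(n_h, n_h))   # recurrent weights
U = rng.normal(scale=0.1, size=(n_h, n_x))   # input weights

# Sensitivities of the hidden state w.r.t. the parameters, carried forward.
S_W = np.zeros((n_h, n_h, n_h))   # S_W[i, k, l] = d h[i] / d W[k, l]
S_U = np.zeros((n_h, n_h, n_x))   # S_U[i, k, l] = d h[i] / d U[k, l]
h = np.zeros(n_h)

for x in rng.normal(size=(10, n_x)):          # a toy input sequence
    pre = W @ h + U @ x
    h_new = np.tanh(pre)
    D = 1.0 - h_new ** 2                      # tanh'(pre), elementwise

    # Chain rule pushed forward in time:
    # d h_new[i]/dW[k,l] = D[i] * (sum_j W[i,j] S_W[j,k,l] + delta_{ik} h[l])
    direct_W = np.zeros_like(S_W)
    direct_W[np.arange(n_h), np.arange(n_h), :] = h    # dpre[i]/dW[i,l] = h[l]
    S_W = D[:, None, None] * (np.einsum('ij,jkl->ikl', W, S_W) + direct_W)

    direct_U = np.zeros_like(S_U)
    direct_U[np.arange(n_h), np.arange(n_h), :] = x    # dpre[i]/dU[i,l] = x[l]
    S_U = D[:, None, None] * (np.einsum('ij,jkl->ikl', W, S_U) + direct_U)

    h = h_new

    # An online gradient at any step needs no stored history, e.g. for a
    # running loss L = 0.5 * ||h||^2 (so dL/dh = h):
    grad_W = np.einsum('i,ikl->kl', h, S_W)
    grad_U = np.einsum('i,ikl->kl', h, S_U)

The storage is O(n_h^2 * (n_h + n_x)) regardless of sequence length, which is the constant-memory property, and the per-step cost cubic in the hidden size is what motivates approximating it.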
On Tue, Mar 8, 2016 at 12:16 PM, Jörg Bornschein bornj@iro.umontreal.ca wrote:
Hi,
this Friday our tea-talk will be a bit more of a brainstorming session, and we'll have David Krueger present and discuss some research ideas.
Title: Bayesian non-parametric discriminative freeze-thaw sequential GANs, and other half-baked research ideas
Who: David Krueger
When: Fri. March 11th, 14:30
Where: AA 3195
Looking forward to seeing you there!
j