Also, FFTs typically speed up convolutions only for large filters (receptive fields). For small (local) filters, you have to zero-pad them to the input size to use FFTs, so you could essentially increase the size of your filters for free, whereas a direct computation takes proportionally longer. I didn't read the paper, but I wouldn't be surprised if it rehashes some of those same ideas.
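[Editor's note: to make the zero-padding point concrete, here is a minimal 1-D sketch, not from the original email; the sizes and the NumPy usage are illustrative assumptions. The small filter is zero-padded up to the full output length before the pointwise multiplication, so the FFT route costs the same regardless of the filter length, while the direct route scales with it.]

import numpy as np

# Toy 1-D illustration (assumed example): the small filter is zero-padded up
# to the full output length to use the FFT, after which the cost no longer
# depends on the filter length k, whereas direct convolution costs O(n*k).
rng = np.random.default_rng(0)
n, k = 1024, 9                               # input length, small filter length
x = rng.standard_normal(n)
w = rng.standard_normal(k)

direct = np.convolve(x, w, mode="full")      # direct linear convolution, O(n*k)

m = n + k - 1                                # full output length
X = np.fft.rfft(x, m)                        # input spectrum
W = np.fft.rfft(w, m)                        # filter zero-padded to length m
fft_conv = np.fft.irfft(X * W, m)            # pointwise product, inverse FFT

print(np.allclose(direct, fft_conv))         # True up to round-off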
Date: Tue, 27 May 2014 14:04:02 -0400
From: yoshua.bengio@gmail.com
To: guillaume.alain.umontreal@gmail.com
CC: lisa_teatalk@iro.umontreal.ca; lisa_labo@iro.umontreal.ca; cho.k.hyun@gmail.com
Subject: Re: [Lisa_labo] Tea Talk Tomorrow 28 May @13:00 AA3195 by Guillaume Alain
Correction.
There is a good reason why it was not done before. When I was working on conv nets with Yann LeCun and Patrice Simard in the early '90s, some people had tried it, but there was no gain. The main reason is that we had very few channels (like 1 in the input and 5 in the first layer) in the lower layers, where most of the computation took place. When the number of channels becomes large, the advantage of doing the FFT greatly increases, because the log(n) overhead can be compensated by re-using each transform across all of the NxN channel combinations (N channels in the previous layer times N channels in the next). Also, I don't remember anybody thinking about the advantage brought by this re-use at the time, but I was not involved in this directly, so I am not sure.
-- Yoshua
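[Editor's note: a rough NumPy sketch of the re-use argument, not code from the thread; the channel counts and map sizes are made-up assumptions. The FFT of each input feature map is computed once and re-used for every output channel, so the transform overhead is shared across all NxN channel pairs.]

import numpy as np

# Assumed channel counts, chosen only to show the re-use: each of the c_in
# input FFTs is computed once and shared by all c_out output channels, so the
# transform overhead is amortized over the c_in * c_out pointwise products.
rng = np.random.default_rng(0)
c_in, c_out, h, w = 16, 16, 32, 32
kh, kw = 5, 5
x = rng.standard_normal((c_in, h, w))            # c_in input feature maps
filt = rng.standard_normal((c_out, c_in, kh, kw))

fh, fw = h + kh - 1, w + kw - 1                  # full-convolution output size
X = np.fft.rfft2(x, s=(fh, fw))                  # c_in FFTs, computed once
F = np.fft.rfft2(filt, s=(fh, fw))               # c_out * c_in filter FFTs

# Every output channel only sums pointwise products of already-computed
# spectra; no further FFTs happen inside this contraction.
Y = np.einsum('oihw,ihw->ohw', F, X)
y = np.fft.irfft2(Y, s=(fh, fw))                 # c_out output maps
print(y.shape)                                   # (16, 36, 36)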
_______________________________________________
Lisa_labo mailing list
Lisa_labo@iro.umontreal.ca
https://webmail.iro.umontreal.ca/mailman/listinfo/lisa_labo