[Lisa_teatalk] [Lisa_labo] Tea Talk Tomorrow 28 May @13:00 AA3195 by Guillaume Alain

Nicolas Boulanger-Lewandowski nicolas_boulanger at hotmail.com
Tue May 27 15:58:02 EDT 2014


Also, FFTs typically speed up convolutions only for large filters (receptive fields). For small (local) filters you have to zero-pad them to the input size to use FFTs, so you could essentially increase the size of your filters for free, while a direct computation would take proportionally longer. I didn't read the paper, but I wouldn't be surprised if it rehashes some of those same ideas.
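A minimal NumPy sketch of that zero-padding argument (my own illustration, not code from the paper): the kernel is padded to the input size, so the FFT-based convolution costs the same whether the filter is 3x3 or 15x15.

import numpy as np

def conv2d_fft(image, kernel):
    # Circular 2-D convolution via the FFT: the kernel is zero-padded to the
    # image size, so the transform cost no longer depends on the kernel size.
    padded = np.zeros_like(image)
    kh, kw = kernel.shape
    padded[:kh, :kw] = kernel
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(padded)))

image = np.random.randn(64, 64)
out_small = conv2d_fft(image, np.random.randn(3, 3))    # same FFT cost...
out_large = conv2d_fft(image, np.random.randn(15, 15))  # ...as the larger filter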

Nicolas

Date: Tue, 27 May 2014 14:04:02 -0400
From: yoshua.bengio at gmail.com
To: guillaume.alain.umontreal at gmail.com
CC: lisa_teatalk at iro.umontreal.ca; lisa_labo at iro.umontreal.ca; cho.k.hyun at gmail.com
Subject: Re: [Lisa_labo] Tea Talk Tomorrow 28 May @13:00 AA3195 by Guillaume Alain

Correction.
There is a good reason why it was not done before. When I was working on conv nets with Yann LeCun and Patrice Simard in the early '90s, some people had tried it, but there was no gain. The main reason is that we had only very few channels (like 1 in the input and 5 in the first layer) in the lower layers, where most of the computation took place. When the number of channels becomes large, the advantage of doing the FFT greatly increases, because the log(n) overhead of each transform is amortized by re-using it across all the NxN channel combinations (N channels in the previous layer times N channels in the next). Also, I don't remember anyone thinking about the advantage brought by this re-use at the time, but I was not involved in this directly, so I am not sure.
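To make that concrete, here is a back-of-the-envelope operation count (my own rough constants, not numbers from the paper or from the '90s experiments): each feature map is transformed once, and those transforms are then re-used across all NxN channel pairs, which only need pointwise products in the frequency domain.

import math

def direct_ops(n_in, n_out, hw, k):
    # one k*k multiply-accumulate per pixel for every input/output channel pair
    return n_in * n_out * hw * 2 * k * k

def fft_ops(n_in, n_out, hw):
    # one (inverse) transform per feature map, re-used across all channel pairs
    transforms = (n_in + n_out) * 2.5 * hw * math.log2(hw)
    pointwise = n_in * n_out * 3 * hw   # complex products on the re-used spectra
    return transforms + pointwise

hw, k = 32 * 32, 5
for n_in, n_out in [(1, 5), (96, 256)]:
    ratio = direct_ops(n_in, n_out, hw, k) / fft_ops(n_in, n_out, hw)
    print("%3d -> %3d channels: direct/FFT ratio ~ %.1f" % (n_in, n_out, ratio))

With these crude constants the ratio is only about 1.5x for 1 -> 5 channels (easily eaten by constant factors and memory traffic in practice), versus roughly 15x for 96 -> 256 channels, which is the re-use advantage described above.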

-- Yoshua


On Tue, May 27, 2014 at 1:49 PM, Guillaume Alain <guillaume.alain.umontreal at gmail.com> wrote:

*spoiler alert* 
Ian is correct in pointing this out, but that's the main point of my talk.

The FFT is not new at all. Everybody knew about it.

It's just that nobody felt like revisiting the idea by actually implementing it and taking measurements.

On Tue, May 27, 2014 at 1:46 PM, Ian Goodfellow <goodfellow.ian at gmail.com> wrote:


FFTs have been used to speed up convolution since at least the 1960s. I'm not sure why this isn't done more in the neural nets community, but this year's ICLR paper is not the place where the idea was first proposed.

2014-05-27 13:21 GMT-04:00 Kyung Hyun Cho <cho.k.hyun at gmail.com>:

Dear all,

Guillaume Alain will tell us tomorrow about the recently proposed technique of using FFTs for fast convolution operations. See below for the details.

See you there!

- Cho

============

- Speaker: Guillaume Alain
- Date and Time: 28 May 2014 @ 13:00
- Place: AA3195
- Paper to be Discussed: Michael Mathieu, Mikael Henaff, Yann LeCun. Fast Training of Convolutional Networks through FFTs. ICLR 2014

_______________________________________________
Lisa_labo mailing list
Lisa_labo at iro.umontreal.ca
https://webmail.iro.umontreal.ca/mailman/listinfo/lisa_labo