A reminder for today's tea-talk by Karol Gregor: 2:00pm, AA3195. See you there!
On Tue, Feb 19, 2013 at 4:51 PM, Guillaume Desjardins <guillaume.desjardins@gmail.com> wrote:
Hi everyone,
We have two upcoming tea-talks by Daan Wierstra and Karol Gregor of DeepMind.
Daan Wierstra will start tomorrow (Wed. Feb 20th, @1:30pm) by providing an overview of DeepMind and their ongoing work in deep learning. On Thursday at 2:00pm, Karol Gregor will present two of his recent papers (titles and abstracts below). See you there!
Title: A lattice filter model of the visual pathway
Authors: Karol Gregor, Dmitri B. Chklovskii
Abstract: Early stages of visual processing are thought to decorrelate, or whiten, the incoming temporally varying signals. Because the typical correlation time of natural stimuli, as well as the extent of temporal receptive fields of lateral geniculate nucleus (LGN) neurons, is much greater than neuronal time constants, such decorrelation must be done in stages combining contributions of multiple neurons. We propose to model temporal decorrelation in the visual pathway with the lattice filter, a signal processing device for stage-wise decorrelation of temporal signals. The stage-wise architecture of the lattice filter maps naturally onto the visual pathway (photoreceptors -> bipolar cells -> retinal ganglion cells -> LGN) and its filter weights can be learned using Hebbian rules in a stage-wise sequential manner. Moreover, predictions of neural activity from the lattice filter model are consistent with physiological measurements in LGN neurons and fruit fly second-order visual neurons. Therefore, the lattice filter model is a useful abstraction that may help unravel visual system function.
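For intuition, here is a minimal NumPy sketch of stage-wise lattice decorrelation (not the paper's code; the online update below is a standard gradient/Burg-style rule and stands in for the Hebbian learning the paper derives). Each stage combines the incoming forward error with a delayed backward error through a single reflection coefficient, and later stages whiten what earlier stages could not:

import numpy as np

def lattice_decorrelate(x, num_stages=3, lr=1e-3):
    # Standard lattice recursion:
    #   f_m(t) = f_{m-1}(t) - k_m * b_{m-1}(t-1)
    #   b_m(t) = b_{m-1}(t-1) - k_m * f_{m-1}(t)
    k = np.zeros(num_stages)        # reflection coefficients, learned online
    b_delay = np.zeros(num_stages)  # stores b_{m-1}(t-1) for each stage
    out = np.zeros(len(x))
    for t, sample in enumerate(x):
        f = b = float(sample)       # f_0(t) = b_0(t) = x(t)
        for m in range(num_stages):
            b_prev = b_delay[m]
            f_new = f - k[m] * b_prev
            b_new = b_prev - k[m] * f
            # gradient step on f_new^2 + b_new^2: decorrelates f and delayed b
            k[m] += lr * (f_new * b_prev + b_new * f)
            b_delay[m] = b          # becomes b_{m-1}(t-1) at the next time step
            f, b = f_new, b_new
        out[t] = f                  # whitened forward prediction error
    return out, k

# Toy usage: an AR(1) signal with a long correlation time gets whitened,
# and the first reflection coefficient converges near the AR coefficient.
rng = np.random.default_rng(0)
x = np.zeros(10000)
for t in range(1, len(x)):
    x[t] = 0.95 * x[t - 1] + rng.standard_normal()
whitened, coeffs = lattice_decorrelate(x)

Note that each update uses only signals local to its own stage, which is what makes a stage-wise, Hebbian-style learning scheme plausible for the retina-to-LGN pathway.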
Title: Fast Approximations to Structured Sparse Coding and Applications to Object Classification
Authors: Arthur Szlam, Karol Gregor, and Yann LeCun
Abstract: We describe a method for fast approximation of sparse coding. A given input vector is passed through a binary tree. Each leaf of the tree contains a subset of dictionary elements. The coefficients corresponding to these dictionary elements are allowed to be nonzero, and their values are calculated quickly by multiplication with a precomputed pseudoinverse. The tree parameters, the dictionary, and the subsets of the dictionary corresponding to each leaf are learned. In the process of describing this algorithm, we discuss the more general problem of learning the groups in group-structured sparse modeling. We show that our method creates good sparse representations by using it in the object recognition framework of [1,2]. Implementing our own fast version of the SIFT descriptor, the whole system runs at 20 frames per second on 321 × 481 sized images on a laptop with a quad-core CPU, while sacrificing very little accuracy on the Caltech 101, Caltech 256, and 15 Scenes benchmarks.
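To make the inference step concrete, here is a small sketch of the encoding pass under stated assumptions (the hyperplane routing rule and all names here are illustrative; the paper additionally learns the tree, the dictionary, and the per-leaf subsets, which is omitted):

import numpy as np

class Node:
    def __init__(self, w=None, left=None, right=None, atom_idx=None, pinv=None):
        self.w = w                    # routing hyperplane (internal nodes only)
        self.left, self.right = left, right
        self.atom_idx = atom_idx      # dictionary columns active at this leaf
        self.pinv = pinv              # precomputed pseudoinverse of D[:, atom_idx]

def encode(x, root, dict_size):
    node = root
    while node.atom_idx is None:      # route the input down to a leaf
        node = node.left if node.w @ x >= 0 else node.right
    z = np.zeros(dict_size)
    z[node.atom_idx] = node.pinv @ x  # least-squares coefficients, one matmul
    return z

# Toy usage: a random dictionary, one split, two leaves with disjoint subsets.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 16))
leafA = Node(atom_idx=np.arange(0, 8), pinv=np.linalg.pinv(D[:, :8]))
leafB = Node(atom_idx=np.arange(8, 16), pinv=np.linalg.pinv(D[:, 8:]))
root = Node(w=rng.standard_normal(64), left=leafA, right=leafB)
code = encode(rng.standard_normal(64), root, dict_size=16)

Because inference is just a tree descent plus one small matrix-vector product, it avoids the iterative optimization of exact sparse coding, which is where the claimed speed comes from.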