This is a reminder for tomorrow's talk!
On Fri, Oct 1, 2010 at 10:06, Dumitru Erhan <erhandum@iro.umontreal.ca> wrote:
The UdeM-McGill-MITACS machine learning seminar series is back with an exciting fall schedule. Next week's seminar (see http://www.iro.umontreal.ca/article.php3?id_article=107&lang=en):
Learning Spatial and Transformational Invariants for Visual Representation
by Charles Cadieu, Redwood Center for Theoretical Neuroscience, University of California, Berkeley
Location: Pavillon André-Aisenstadt (UdeM), room AA-3195
Time: Thursday, October 7, 14:00
Abstract: Learning abstract, invariant properties of the visual world is a key attribute of biological vision systems. I will describe a hierarchical, probabilistic model that learns to extract invariant spatial structure and invariant motion structure from movies of the natural environment. The first layer in the model produces a sparse decomposition of local edge and motion structure. This decomposition is achieved through a complex-valued sparse coding model in which amplitudes represent the presence of edge structure at specific positions, orientations, and spatial scales, and phases represent the precise positions of edges and how image structure changes through time. This decomposition into amplitude and phase exposes statistical dependencies that the top layer in the model captures as two types of invariances: spatial and transformational. The spatial invariants are a sparse code of the patterns in the first-layer amplitude components. They code a rich set of multi-scale edges, textures, and texture-defined edge boundaries. The transformational invariants are a sparse representation of patterns of change in the first-layer phase components. They learn a multi-scale code of motion, spanning local and global motions, and are capable of learning complex motions such as zooming, rotation, and deformation. Besides extracting abstract, invariant properties of the visual world, I will show how the hierarchical model provides a concrete model of cortical feedback that is useful for perception under noisy or ambiguous conditions.
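For readers curious about the amplitude/phase decomposition mentioned in the abstract, here is a minimal sketch (not the speaker's code) of complex-valued sparse coding: a patch is approximated as the real part of a complex dictionary times complex coefficients, with an L1 penalty on the coefficient amplitudes. The dictionary here is random rather than learned from natural movies, the inference routine is a generic ISTA-style loop, and all names (Phi, infer_coefficients, the step sizes) are illustrative assumptions.

```python
# Sketch: complex-valued sparse coding with amplitude/phase readout.
# Assumptions: random (not learned) complex dictionary, ISTA-style inference.
import numpy as np

rng = np.random.default_rng(0)

patch_dim = 64      # e.g. 8x8 image patch, flattened
n_basis = 32        # number of complex basis functions
lam = 0.1           # L1 weight on coefficient amplitudes
eta = 0.01          # gradient step size
n_steps = 200       # inference iterations

# Complex dictionary: each column plays the role of an "edge" template.
Phi = rng.standard_normal((patch_dim, n_basis)) + 1j * rng.standard_normal((patch_dim, n_basis))
Phi /= np.linalg.norm(Phi, axis=0, keepdims=True)

def infer_coefficients(x, Phi, lam=lam, eta=eta, n_steps=n_steps):
    """Infer complex coefficients a minimizing
    ||x - Re(Phi a)||^2 + lam * sum(|a|)  (sparsity on amplitudes)."""
    a = np.zeros(Phi.shape[1], dtype=complex)
    for _ in range(n_steps):
        residual = x - (Phi @ a).real
        # Gradient step on the reconstruction term (Wirtinger gradient).
        a = a + eta * (Phi.conj().T @ residual)
        # Complex soft-thresholding: shrink the amplitude, keep the phase.
        amp = np.abs(a)
        a = np.where(amp > eta * lam,
                     (1.0 - eta * lam / np.maximum(amp, 1e-12)) * a,
                     0.0)
    return a

# Toy "frame": a random patch standing in for natural movie data.
x = rng.standard_normal(patch_dim)
a = infer_coefficients(x, Phi)

amplitude = np.abs(a)    # presence of edge/motion structure
phase = np.angle(a)      # precise position; its change across frames encodes motion
print("active units:", int((amplitude > 1e-6).sum()),
      "| reconstruction error:", float(np.linalg.norm(x - (Phi @ a).real)))
```

In the talk's hierarchical model, the top layer then models patterns across these amplitudes (spatial invariants) and across changes in the phases over time (transformational invariants); the sketch above only covers the first-layer decomposition.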