Hello,
I'm giving a seminar for a non-specialist audience (computer science and electrical engineering) in two weeks at McGill, on artificial intelligence via machine learning and the hard optimization problems it involves.
The announcement is here: http://www.cse.mcgill.ca/Fall07/YB.pdf
Friday Nov 16 at 14:30 in McConnell Engineering Bldg Room 103
Learning Deep Architectures for AI
Theoretical results in circuit complexity theory and in non-parametric statistics strongly suggest that learning the kind of complicated functions that can represent high-level abstractions (e.g. in vision, language, and other AI-level tasks) may require "deep architectures", composed of multiple levels of non-linear operations, such as neural nets with many hidden layers. Searching the parameter space of deep architectures appears to be a fundamentally difficult optimization task, so approximate numerical optimization schemes are called for. Learning algorithms such as those for Deep Belief Networks have recently been proposed that make a dent in this difficult optimization task, beating the state of the art in certain areas. This talk discusses the motivations and principles behind learning algorithms for deep architectures, in particular those based on unsupervised learning such as Deep Belief Networks, which use single-layer models such as Restricted Boltzmann Machines as building blocks.
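
For readers curious what the single-layer building block looks like in code, here is a minimal, illustrative sketch of a binary Restricted Boltzmann Machine trained with one-step contrastive divergence (CD-1), the kind of unsupervised learner the abstract mentions as a component of Deep Belief Networks. This is not the talk's implementation; all names, shapes, and hyperparameters are assumptions for the example.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    class RBM:
        """Binary-binary Restricted Boltzmann Machine trained with CD-1
        (illustrative sketch, not the method presented in the talk)."""

        def __init__(self, n_visible, n_hidden, rng=None):
            self.rng = rng or np.random.default_rng(0)
            # Small random weights; biases start at zero.
            self.W = 0.01 * self.rng.standard_normal((n_visible, n_hidden))
            self.b_v = np.zeros(n_visible)   # visible biases
            self.b_h = np.zeros(n_hidden)    # hidden biases

        def sample_h(self, v):
            # Hidden activation probabilities and a binary sample given visibles.
            p = sigmoid(v @ self.W + self.b_h)
            return p, (self.rng.random(p.shape) < p).astype(float)

        def sample_v(self, h):
            # Visible activation probabilities and a binary sample given hiddens.
            p = sigmoid(h @ self.W.T + self.b_v)
            return p, (self.rng.random(p.shape) < p).astype(float)

        def cd1_update(self, v0, lr=0.1):
            # Positive phase: hidden probabilities given the data.
            ph0, h0 = self.sample_h(v0)
            # Negative phase: one step of Gibbs sampling (reconstruction).
            pv1, _ = self.sample_v(h0)
            ph1, _ = self.sample_h(pv1)
            # CD-1 approximation to the log-likelihood gradient.
            batch = v0.shape[0]
            self.W += lr * (v0.T @ ph0 - pv1.T @ ph1) / batch
            self.b_v += lr * (v0 - pv1).mean(axis=0)
            self.b_h += lr * (ph0 - ph1).mean(axis=0)
            # Reconstruction error, a rough training monitor.
            return np.mean((v0 - pv1) ** 2)

    # Toy usage on random binary data, just to show the training loop.
    if __name__ == "__main__":
        rng = np.random.default_rng(42)
        data = (rng.random((500, 20)) < 0.3).astype(float)
        rbm = RBM(n_visible=20, n_hidden=10, rng=rng)
        for epoch in range(10):
            err = rbm.cd1_update(data)
            print(f"epoch {epoch}: reconstruction error {err:.4f}")

A Deep Belief Network would then stack such layers greedily: train one RBM on the data, use its hidden activations as "data" for the next RBM, and so on, which is the unsupervised layer-wise strategy the abstract alludes to.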