[Lisa_seminaires] [mila-tous] Re: [Tea Talk] Yoshua Bengio (Mila) Fri 12 April 2019 10h30 Mila Auditorium

Simon Lacoste-Julien slacoste at iro.umontreal.ca
Fri 26 Apr 15:47:49 EDT 2019


On Thu, Apr 25, 2019 at 5:31 PM Simon Lacoste-Julien
<slacoste at iro.umontreal.ca> wrote:
>
> Link FYI:
> https://sites.google.com/lisa.iro.umontreal.ca/tea-talk-recordings

FYI this requires mila.quebec credentials to access...

-S

>
> -S
>
> On Wed, Apr 24, 2019 at 5:38 PM Rim Assouel <rim.assouel at gmail.com> wrote:
> >
> > Yes, you can find the recording on the Mila YouTube channel or on the Tea Talk website!
> >
> > Cheers,
> > The Tea Talk Team
> >
> > On Wednesday, 24 April 2019, Rémi LP <remi.lp.17 at gmail.com> wrote:
> >>
> >> Was this talk recorded?
> >>
> >> On Apr 12, 2019, at 09:59, Rim Assouel <rim.assouel at gmail.com> wrote:
> >>
> >> Reminder that this happens in 30 minutes :)
> >>
> >> On Monday, 8 April 2019, Rim Assouel <rim.assouel at gmail.com> wrote:
> >>>
> >>> This week we have Yoshua Bengio from Mila giving a talk on "Meta-transfer learning for factorizing representations and knowledge for AI" at 10h30 in the Mila Auditorium.
> >>>
> >>> Will this talk be streamed? Yes.
> >>>
> >>> See you there!
> >>> The Tea Talk Team
> >>>
> >>> TITLE Meta-transfer learning for factorizing representations and knowledge for AI
> >>>
> >>> ABSTRACT
> >>> Whereas machine learning theory has focused on generalization to examples from the same distribution as the training data, it is important to better understand transfer scenarios in which the observed distribution changes, often repeatedly, over the lifetime of the learning agent: both for robust deployment and to achieve the more powerful form of generalization that humans seem to enjoy and that seems necessary for learning agents. Whereas most machine learning algorithms and architectures can be traced back to assumptions about the training distribution, we also need to explore assumptions about how the observed distribution changes. We propose that sparsity of change in distribution, when knowledge is represented appropriately, is a good assumption for this purpose. If that assumption holds and knowledge is represented appropriately, it leads to fast adaptation to changes in distribution, and the speed of adaptation to such changes can therefore be used as a meta-objective that drives the discovery of knowledge representations compatible with the assumption. We illustrate these ideas in causal discovery: is some variable a direct cause of another, and how can raw data be mapped to a representation space in which different dimensions correspond to causal variables between which a clear causal relationship exists? We propose a large research program in which this non-stationarity assumption and meta-transfer objective are combined with other closely related assumptions about the world embodied in a world model, such as the consciousness prior (the causal graph is captured by a sparse factor graph) and the assumption that the causal variables are often those an agent can act upon (the independently controllable factors prior), both of which should be useful for agents that plan, imagine and try to find explanations for what they observe.
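> >>>
> >>> As a rough, purely illustrative sketch of this adaptation-speed idea (hypothetical PyTorch code, not material from the talk): fit two candidate factorizations of a two-variable joint, P(A)P(B|A) and P(B)P(A|B), on data whose true mechanism is A -> B, apply an intervention that changes only P(A), and compare their log-likelihoods after a few adaptation steps on a small post-intervention sample. The hypothesis aligned with the true causal direction only needs to update its P(A) module and so tends to adapt faster; all names, sizes and step counts below are arbitrary assumptions.
> >>>
> >>> import torch
> >>>
> >>> N = 10       # number of discrete values for A and B (arbitrary)
> >>> STEPS = 5    # few-shot adaptation steps (arbitrary)
> >>> torch.manual_seed(0)
> >>>
> >>> def sample(p_a, p_b_given_a, n=1000):
> >>>     # Draw (A, B) pairs from the ground-truth mechanism A -> B.
> >>>     a = torch.multinomial(p_a, n, replacement=True)
> >>>     b = torch.multinomial(p_b_given_a[a], 1).squeeze(1)
> >>>     return a, b
> >>>
> >>> class Factorization(torch.nn.Module):
> >>>     # Models P(X) P(Y | X) with unconstrained logits.
> >>>     def __init__(self):
> >>>         super().__init__()
> >>>         self.marg = torch.nn.Parameter(torch.zeros(N))
> >>>         self.cond = torch.nn.Parameter(torch.zeros(N, N))
> >>>     def log_prob(self, x, y):
> >>>         lp_x = torch.log_softmax(self.marg, 0)[x]
> >>>         lp_y = torch.log_softmax(self.cond, 1)[x, y]
> >>>         return (lp_x + lp_y).mean()
> >>>
> >>> def fit(model, x, y, lr=0.1, n_steps=200):
> >>>     opt = torch.optim.Adam(model.parameters(), lr=lr)
> >>>     for _ in range(n_steps):
> >>>         opt.zero_grad()
> >>>         (-model.log_prob(x, y)).backward()
> >>>         opt.step()
> >>>
> >>> # Ground truth: A causes B; the mechanism P(B | A) never changes.
> >>> p_b_given_a = torch.softmax(2 * torch.randn(N, N), 1)
> >>> p_a_train = torch.softmax(2 * torch.randn(N), 0)
> >>> p_a_shift = torch.softmax(2 * torch.randn(N), 0)  # intervention on P(A) only
> >>>
> >>> a_tr, b_tr = sample(p_a_train, p_b_given_a)
> >>> a_sh, b_sh = sample(p_a_shift, p_b_given_a, n=50)  # small transfer set
> >>>
> >>> m_ab = Factorization()   # hypothesis A -> B: P(A) P(B | A)
> >>> m_ba = Factorization()   # hypothesis B -> A: P(B) P(A | B)
> >>> fit(m_ab, a_tr, b_tr)
> >>> fit(m_ba, b_tr, a_tr)
> >>>
> >>> # Few-shot adaptation after the intervention: the correct hypothesis only
> >>> # has to relearn the small P(A) module, so it typically recovers a higher
> >>> # log-likelihood within a handful of steps -- this gap is the meta-signal.
> >>> for name, m, x, y in [("A->B", m_ab, a_sh, b_sh), ("B->A", m_ba, b_sh, a_sh)]:
> >>>     fit(m, x, y, lr=0.1, n_steps=STEPS)
> >>>     print(name, "post-adaptation log-likelihood:", round(m.log_prob(x, y).item(), 3))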
> >>>
> >>> BIO
> >>> Yoshua Bengio is Full Professor in the computer science and operations research department at U. Montreal, scientific director of Mila and of IVADO, recipient of the 2018 Turing Award, Canada Research Chair in Statistical Learning Algorithms, and a Canada CIFAR AI Chair. He pioneered deep learning and, in 2018, received the most citations per day of any computer scientist worldwide. He is an Officer of the Order of Canada and a Member of the Royal Society of Canada, was awarded the Marie-Victorin Prize and was named Radio-Canada's Scientist of the Year in 2017, and he is a member of the NeurIPS board, co-founder and general chair of the ICLR conference, as well as program director of the CIFAR program on Learning in Machines and Brains. His goal is to contribute to uncovering the principles that give rise to intelligence through learning, and to favour the development of AI for the benefit of all.
> >>
> >>



-- 
Simon Lacoste-Julien
Assistant Professor & CIFAR Fellow & CCAI chair
Department of Computer Science and Operations Research
Université de Montréal & Mila
http://www.iro.umontreal.ca/~slacoste


More information about the Lisa_seminaires mailing list