[Lisa_teatalk] Tea Talk 6 Aug Wednesday @13.00 AA3195 by Dr. Peter Sunehag

Kyung Hyun Cho cho.k.hyun at gmail.com
Wed Aug 6 09:28:21 EDT 2014


This is a reminder of today's tea talk at 13.00 by Dr. Peter Sunehag.

Also, this Friday we will have a practice talk by Amjad for his
presentation at the CIFAR Summer School next week (if there is time left, I
might also do my practice talk). I will send a separate announcement for
that talk later today.

Hope to see many of you today at the tea talk!
- Cho




On Mon, Aug 4, 2014 at 7:19 PM, Kyung Hyun Cho <cho.k.hyun at gmail.com> wrote:

> Dear all,
>
> We will have a talk this Wednesday by Dr. Peter Sunehag. See below for the
> details and the attached paper.
>
> Hope to see many of you there!
> - Cho
>
> - Speaker: Dr. Peter Sunehag (Australian National University,
> http://people.cecs.anu.edu.au/user/1446)
> - Date and Time: 6 Aug 2014 @13.00
> - Place: AA3195
> - Title: Optimism as a fundamental decision-making principle
> - Abstract:
> We discuss the usefulness and rationality of optimism in general
> sequential decision making. Humans are often optimistically biased, and
> optimists achieve more of their ambitions than more rational people do.
> We provide a mathematical analysis of general optimistic agents and
> identify how they can be better or worse than strictly rational agents.
> These agents select the most optimistic among the still plausible
> hypotheses from a class. Further, we discuss a milder form of optimism in
> the case of continuously parameterized classes. We refer to this setting as
> reward-modulated inference, a framework that includes recent models for
> synaptic weight updates in neuroscience as a special case.  At the center
> is a trade-off between assigning high probability to likable or likely
> events.  A nice consequence is that we can formulate generative and
> discriminative learning as endpoints of a continuum. Finally, I present
> some first experiments using autoencoders on the classical MNIST task,
> showing a very simple way of getting decent but not state-of-the-art
> accuracy.
> The aim is not to optimize narrow tasks maximally but to realize the law of
> effect: making choices with a frequency proportional to past rewards in
> situations deemed similar. In really complex tasks this can be a sufficient
> objective, and one that humans and animals often settle for.
> Also, the strategy provides suitable exploration as well as robustness to
> change.
>
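
A quick illustration of the first idea: "selecting the most optimistic among
the still plausible hypotheses" is, in its simplest bandit form, the familiar
optimism-in-the-face-of-uncertainty recipe. The Python sketch below is my own
minimal rendering, not code from the talk; the class name and the confidence
parameter are assumptions.

    import math

    class OptimisticAgent:
        """Keep a confidence interval (the 'plausible set') per arm and
        act on the most optimistic value inside it, UCB-style."""

        def __init__(self, n_arms, confidence=2.0):
            self.counts = [0] * n_arms    # pulls per arm
            self.sums = [0.0] * n_arms    # accumulated reward per arm
            self.confidence = confidence  # width of the plausible set
            self.t = 0                    # global time step

        def select(self):
            self.t += 1
            best_arm, best_value = 0, float("-inf")
            for arm in range(len(self.counts)):
                if self.counts[arm] == 0:
                    return arm            # untried arms stay maximally plausible
                n = self.counts[arm]
                mean = self.sums[arm] / n
                # Most optimistic value still inside the confidence set.
                width = math.sqrt(self.confidence * math.log(self.t) / n)
                if mean + width > best_value:
                    best_arm, best_value = arm, mean + width
            return best_arm

        def update(self, arm, reward):
            self.counts[arm] += 1
            self.sums[arm] += reward

A rarely tried arm keeps a wide plausible set, so the agent keeps revisiting
it; whether that bias helps or hurts relative to a strictly rational agent is
exactly the kind of question the abstract's analysis addresses.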
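
The "likable or likely" trade-off at the center of reward-modulated inference
can also be sketched in a few lines. The weighting below, with its beta knob,
is an assumption on my part about what such a trade-off could look like, not
the talk's actual definition: beta = 0 recovers ordinary maximum likelihood
("likely"), while large beta concentrates probability mass on rewarded data
("likable"), which is one way a generative-to-discriminative continuum could
arise.

    import numpy as np

    def reward_modulated_nll(log_probs, rewards, beta):
        # log_probs: per-datum log p_theta(x_t) under the current model
        # rewards:   per-datum reward r_t (how "likable" x_t is)
        # beta:      trades likelihood off against reward (assumed form)
        weights = 1.0 + beta * np.asarray(rewards)
        return -np.sum(weights * np.asarray(log_probs))  # minimize this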
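
Finally, the law-of-effect objective, choosing actions with frequency
proportional to past rewards in similar situations, can be stated directly as
a sampling rule. This sketch is again my own illustration rather than the
speaker's method; the positive prior that keeps every action available is my
addition.

    import random
    from collections import defaultdict

    class LawOfEffectPolicy:
        def __init__(self, prior=1.0):
            # prior > 0 means no action's probability ever hits zero
            self.totals = defaultdict(lambda: defaultdict(lambda: prior))

        def choose(self, situation, actions):
            # sample with probability proportional to accumulated reward
            weights = [self.totals[situation][a] for a in actions]
            return random.choices(actions, weights=weights, k=1)[0]

        def update(self, situation, action, reward):
            # accumulate (nonnegative) reward for the chosen action
            self.totals[situation][action] += max(reward, 0.0)

Because choice frequencies track accumulated reward rather than an argmax,
the policy never fully commits to one action, which matches the abstract's
claim that the strategy provides suitable exploration as well as robustness
to change.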