[Lisa_seminaires] Fwd: Next DIRO Colloquium next Thursday, January 18

Yoshua Bengio yoshua.umontreal at gmail.com
Thu 11 Jan 13:14:43 EST 2018


Hi all,

We'll have a special guest next Thursday, Peter Railton, a philosopher
teaching at U. Michigan, Ann Arbor. He will talk on morality and AI.
Note that it's not in the usual tea-talk slot but during the department
colloquium (3pm for the munchies, 3:30pm for the talk). I heard him speak
last fall and was quite impressed by how well he understands current
deep-learning-based AI from a philosophical standpoint.

If you are interested in chatting with him during his visit, please contact
Michael (in cc) and he will set up a meeting schedule.

-- Yoshua

---------- Forwarded message ----------
From: Gilles Brassard <brassard at iro.umontreal.ca>
Date: 2018-01-11 5:28 GMT-05:00
Subject: Next DIRO Colloquium next Thursday, January 18
To: seminaires at iro.umontreal.ca
Cc: Yoshua Bengio <yoshua.umontreal at gmail.com>


Hello everyone,

The first DIRO Colloquium of the term (and thus of the year 2018)
will take place next Thursday. Don't miss this chance!  :-)

Speaker: Peter Railton

Affiliation: University of Michigan, Ann Arbor

Location: Z330 Pavillon Claire-McNicoll
          Université de Montréal

When: January 18, 2018, 3:30pm (coffee and cookies starting at 3pm)

Title: Moral Learning and Artificial Intelligence
(the talk will be given in English)

Abstract: Traditional approaches to moral development have emphasized the
explicit teaching of norms, e.g., via parental instruction, or the
acquisition of behavioral dispositions by “social learning”, e.g., via
infant imitation and modeling of observed behaviors, or progression through
a fixed set of developmental “stages”. But what if we understood moral
learning as closer to causal learning and the development of commonsense
physics? Developmental evidence suggests that infants early on begin to
model their physical environment and its possibilities (Gopnik & Schulz,
2004), using observation but receiving very limited explicit instruction or
external reinforcement. Similarly, there is evidence that infants early on
begin learning a kind of commonsense psychology that enables them to model
others’ behavior in terms of intentional states, once again, using
observation but very limited explicit instruction or external reinforcement
(Wellman, 2014). These internal models enable infants to interact
reasonably successfully with their physical and social environment even if
they are unable to articulate the causal or psychological principles
involved—the knowledge underlying these capacities is therefore
generalizable despite being implicit, and so is spoken of as intuitive.
Internal models are not limited to causal and predictive information,
however, but also appear to encode evaluative information, including
evaluation of possible actions or third-party social interactions for such
features as helpfulness, harm, knowledgeability, and trustworthiness
(Hamlin et al., 2011; Doebel & Koenig, 2013). When combined with an
implicit capacity to empathically simulate the mental states of others,
these evaluative capacities can underwrite a kind of intuitive learning of
commonsense morality. Such learning occurs without much explicit
instruction in moral principles, yet with a capacity to generalize and with
some degree of moral autonomy—so that by age 3-4, children will resist
conforming to imposed rules that involve harm or unfairness toward others
(Turiel, 2002).

To be genuinely intelligent, artificial systems will need to possess the
kinds of intuitive knowledge involved in commonsense physics and
psychology. And to be both autonomous and trustworthy, artificial systems
will need to be able to evaluate situations, actions, and agents in
terms of such categories of commonsense morality as helpfulness, harm,
knowledgeability, and trustworthiness. Deep learning approaches suggest how
intuitive knowledge of the kind involved in predictive learning might be
acquired and represented, without being “programmed in” or explicitly
taught. How might further developments of these approaches make possible
the acquisition of intuitive evaluative knowledge of the kind involved in
commonsense epistemic or moral assessment?

Biography: Peter Railton is the G.S. Kavka Distinguished University
Professor in the Department of Philosophy at the University of Michigan,
Ann Arbor. His main research has been in ethics and the philosophy of
science, focusing especially on questions about the nature of objectivity,
value, norms, and explanation. Recently, he has also begun working in
aesthetics, moral psychology, and the theory of action, and on the bearing
of empirical research in these areas. Among his publications are Facts,
Values, and Norms (Cambridge), a collection of some of his papers in ethics
and meta-ethics, and Homo Prospectus (Oxford), a joint project with
psychology and neuroscience on basic mental architecture. He has been a
visiting professor at Berkeley and Princeton, and he has received
fellowships from the ACLS, the NEH, and the Guggenheim Foundation. He is a
former President of the American Philosophical Association (Central
Division) and is a Fellow of the American Academy of Arts and Sciences.

We hope to see many of you there!

        - Gilles and Yoshua, Colloquium organizers

