[Lisa_seminaires] [Tea Talk] Alex Ororbia (Penn State) Fri Mar 23 10:30AM AA1360

Michael Noukhovitch mnoukhov at gmail.com
Fri Mar 23 10:21:38 EDT 2018


Reminder: this is in 10 minutes!

On Tue, Mar 20, 2018, 18:08 Michael Noukhovitch <mnoukhov at gmail.com> wrote:

> This week we have *Alexander G. Ororbia II* from *Penn State* giving a
> talk on *Friday March 23* at *10:30AM* in room *AA1360*.
>
> *Want to meet with Alex?* Fill out this Doodle
> <https://doodle.com/poll/6nkharwxw8ya26ys> and we'll find a spot!
>
> This talk should present some new ideas that will really challenge us to
> never stop learning, so you should definitely come!
> Michael
>
> *TITLE* Adaptation in the Face of Evolving Distributions: Towards
> Continual Learning of Connectionist Architectures
>
> *KEYWORDS* continual/lifelong/meta-learning, optimization,
> biologically inspired RNNs
>
>
> *ABSTRACT*
> It is a common statistical learning practice to build models on (very
> large) static datasets of independently and identically distributed
> samples. But what if the distribution to be learned is dynamic and samples
> are drawn from it over time? In this setting, traditional learning
> approaches no longer directly apply. Motivated by this issue, we must look
> to continual, or lifelong, learning and the nature of systems that adapt
> themselves to such evolving distributions. With the goal of creating
> robust and scalable continual learning systems, my work can be decomposed
> into two main threads: 1) creating architectures with longer-term memory
> and 2) developing better learning algorithms.
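>
> (To make this setting concrete, here is a minimal sketch -- not from the
> talk, with made-up names and constants -- contrasting the usual static
> i.i.d. practice with online updates on a stream whose distribution drifts
> across tasks.)
>
>     import numpy as np
>
>     rng = np.random.default_rng(0)
>
>     def sample_task(task_id, n=32):
>         # Each "task" is a 2-D Gaussian whose mean drifts with task_id,
>         # so the data distribution evolves over the stream.
>         mean = np.array([task_id, -task_id], dtype=float)
>         x = rng.normal(mean, 1.0, size=(n, 2))
>         y = (x[:, 0] - x[:, 1] > 2.0 * task_id).astype(float)
>         return x, y
>
>     # Instead of pooling one large static dataset and fitting once,
>     # the learner sees tasks one at a time and must adapt online.
>     w = np.zeros(2)
>     for task_id in range(5):                  # evolving distribution
>         x, y = sample_task(task_id)
>         for xi, yi in zip(x, y):              # one sample at a time
>             pred = 1.0 / (1.0 + np.exp(-xi @ w))
>             w += 0.1 * (yi - pred) * xi       # simple logistic SGD step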
>
> In this talk, I will focus on the second thread: my efforts to develop
> alternatives to back-propagation of errors. In particular, I will present
> Local Representation Alignment (LRA), a training procedure that is much
> less sensitive to bad initializations, requires no modifications to the
> network architecture, can be readily adapted to networks with highly
> nonlinear and discrete-valued activation functions and stochastic sampling
> operations, and can even train networks with various kinds of activation
> functions from zero initialization. Results on classification benchmarks
> will be discussed, and recent results in the lifelong learning setting,
> where the model must learn across tasks, will be presented.
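>
> (The following is not LRA itself, only a heavily simplified sketch of the
> general idea behind such local, backprop-free updates: each layer adjusts
> its weights against a locally computed target rather than a gradient
> propagated end to end. All sizes, constants, and the exact update rule
> here are illustrative assumptions.)
>
>     import numpy as np
>
>     rng = np.random.default_rng(0)
>     f = np.tanh
>     df = lambda z: 1.0 - np.tanh(z) ** 2
>
>     # Toy two-layer network with small random weights
>     W1 = rng.normal(0.0, 0.1, size=(8, 4))
>     W2 = rng.normal(0.0, 0.1, size=(2, 8))
>     x = rng.normal(size=4)               # one input
>     y = np.array([0.8, -0.8])            # its target
>
>     lr, beta = 0.05, 0.1
>     for _ in range(200):
>         z1 = W1 @ x;  h1 = f(z1)         # forward pass
>         z2 = W2 @ h1; y_hat = f(z2)
>
>         e2 = y - y_hat                   # output discrepancy
>         t1 = h1 + beta * (W2.T @ e2)     # local target for the hidden layer
>
>         # Each layer updates from locally available quantities only
>         # (its input, its activation, its local error); there is no
>         # end-to-end backward pass through the network.
>         W2 += lr * np.outer(e2 * df(z2), h1)
>         W1 += lr * np.outer((t1 - h1) * df(z1), x)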
>
> Finally, I will talk about my proposed neural architecture, the Temporal
> Neural Coding Network (TNCN), which builds on concepts from predictive
> coding, an influential theory of brain function from cognitive science.
> Using the proposed Discrepancy Reduction learning procedure, which can be
> viewed as a special variation of LRA, the TNCN can learn from sequential
> data such as videos or text documents. More importantly, initial results
> show promise that the TNCN can be competitive with models trained using
> back-propagation through time--the popular but biologically implausible
> algorithm used to "unroll" and train recurrent neural networks.
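>
> (Again purely as an illustration -- not the TNCN or Discrepancy Reduction
> as defined in this work -- here is a toy predictive-coding-style loop over
> a sequence: a latent state predicts each observation, and the prediction
> error drives both a state update and a local weight update, with no
> unrolling through time. All names and constants are hypothetical.)
>
>     import numpy as np
>
>     rng = np.random.default_rng(0)
>     dim_x, dim_z = 6, 3
>     W = rng.normal(0.0, 0.1, size=(dim_x, dim_z))  # generative weights: state -> predicted input
>     z = np.zeros(dim_z)                            # latent state carried across time steps
>
>     seq = rng.normal(size=(20, dim_x))             # a toy sequence (frames, word vectors, ...)
>     lr_z, lr_w = 0.2, 0.05
>
>     for x_t in seq:
>         x_pred = W @ z                   # top-down prediction of the current input
>         e_t = x_t - x_pred               # prediction error (the "discrepancy")
>         z = z + lr_z * (W.T @ e_t)       # settle the state to reduce the discrepancy
>         W += lr_w * np.outer(e_t, z)     # local, Hebbian-like weight update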
>
> *BIO*
> Alex is currently a Ph.D. student at The Pennsylvania State University in
> Information Sciences & Technology. In 2013, he obtained his Bachelor of
> Science degree in Computer Science & Engineering at Bucknell University,
> minoring in Philosophy and Mathematics. The focus of his work is on
> lifelong learning--an important and challenging open problem in machine
> learning. He studies representation learning and draws inspiration from
> ideas in cognitive science and neuroscience. Alex's mission is to develop
> the learning procedures and architectures needed to create general-purpose,
> adaptive agents that can learn from data of multiple modalities and operate
> in messy, non-stationary environments.
>


More information about the Lisa_seminaires mailing list