[Lisa_seminaires] Fwd: UdeM-McGill-mPrime machine learning seminar Fri. Nov. 11th @ 14h00, University of Montreal, room AA3195.

Guillaume Desjardins guillaume.desjardins at gmail.com
Thu 10 Nov 16:09:13 EST 2011


A friendly reminder for tomorrow's mPrime talk by Vincent Gripon. As
always, the talk will be held in AA3195 (Pavillon André-Aisenstadt,
Université de Montréal)!

---------- Forwarded message ----------
From: Guillaume Desjardins <guillaume.desjardins at gmail.com>
Date: Wed, Nov 9, 2011 at 12:47 PM
Subject: UdeM-McGill-mPrime machine learning seminar Fri. Nov. 11th @
14h00, University of Montreal, room TBD.
To: lisa_seminaires at iro.umontreal.ca


A UdeM-McGill-mPrime machine learning seminar will be held this
Friday, Nov. 11th. The talk, given by Vincent Gripon, will take place
from 14h00 to 15h00 at the Université de Montréal; the room number
will be confirmed shortly. Hope to see you there!

Title: Networks of neural cliques
Speaker: Vincent Gripon

Abstract:

We propose and develop an original model of associative memories
relying on coded neural networks. Associative memories are devices
able to learn messages and then retrieve them from part of their
content. The state-of-the-art model in terms of efficiency (the ratio
of the number of bits stored to the number of bits used) is the
Hopfield Neural Network, whose learning diversity - the number of
messages it can store - is lower than $\frac{n}{2 \log(n)}$, where n
is the number of neurons in the network.
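
For a rough sense of how slowly this bound grows, here is a quick
numerical check (a sketch in plain Python; the natural logarithm is
assumed in the bound):

import math

def hopfield_diversity_bound(n):
    """Upper bound n / (2 log n) on the number of messages a
    Hopfield network with n neurons can store."""
    return n / (2 * math.log(n))

for n in (1000, 10000, 100000):
    print(n, round(hopfield_diversity_bound(n)))
# 1000   ->   72 messages
# 10000  ->  543 messages
# 100000 -> 4343 messages

Even with 100,000 neurons, the bound allows fewer than 5,000 messages.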

Our work consists of using error-correcting coding and decoding
techniques, more precisely distributed codes, to considerably increase
the performance of associative memories. To achieve this, we introduce
original codes whose codewords rely on neural cliques. We show that,
combined with sparse local codes, these neural cliques offer a
learning diversity that grows quadratically with the number of
neurons.
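
To make the clique idea concrete, here is a minimal sketch in Python
with NumPy; the clustered layout and the names C, L, W, neuron and
store are assumptions made for this illustration, not the speaker's
actual construction:

import numpy as np

C, L = 4, 16                      # C clusters of L neurons each
n = C * L                         # total number of neurons
W = np.zeros((n, n), dtype=bool)  # binary connections (no weights)

def neuron(cluster, symbol):
    """Index of the neuron coding `symbol` in `cluster`."""
    return cluster * L + symbol

def store(message):
    """Store a message (one symbol in range(L) per cluster) as a
    fully connected clique over the selected neurons."""
    units = [neuron(c, s) for c, s in enumerate(message)]
    for i in units:
        for j in units:
            if i != j:
                W[i, j] = True

store((3, 7, 0, 12))  # learn one example message

Each stored message touches only C neurons and C*(C-1) binary
connections, which is the sparsity the abstract refers to below.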

The observed gains come from the use of sparsity at several levels:
the length of learned messages is much shorter than n, and each
message uses only part of the available material, in terms of both
neurons and connections. The learning process is therefore local,
contrary to the Hopfield model. Moreover, these memories offer nearly
optimal efficiency. They therefore appear to be a very interesting
alternative to classical indexed memories.

Besides the performance aspects, the proposed model offers much
greater biological plausibility than the Hopfield one. Indeed, the
concepts of neural cliques, winner-take-all, and even temporal
synchronization that we introduce into our networks match recent
observations in the neurobiological literature. Moreover, since neural
cliques are interwoven through their vertices and/or their
connections, the proposed model offers new perspectives for the design
of cognitive machines able to cross pieces of information in order to
produce new ones.
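
Continuing the sketch above, a missing symbol could be recovered with
one winner-take-all step per cluster. This single pass is a
simplification (an actual decoder would typically iterate), and the
function name retrieve is assumed for the example:

def retrieve(partial):
    """Complete a message from known symbols (None marks an erased
    position) with one winner-take-all step per cluster."""
    known = [neuron(c, s) for c, s in enumerate(partial) if s is not None]
    result = list(partial)
    for c, s in enumerate(partial):
        if s is None:
            # score each candidate by its connections to the known
            # neurons, then keep the cluster's winner
            scores = [W[neuron(c, k), known].sum() for k in range(L)]
            result[c] = int(np.argmax(scores))
    return tuple(result)

print(retrieve((3, 7, None, 12)))  # -> (3, 7, 0, 12)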


More information on the Lisa_seminaires mailing list