[Lisa_teatalk] [Lisa_labo] Tea Talk 1 Aug Friday @13.00 AA3195 by Yoshua Bengio

Guillaume Alain guillaume.alain.umontreal at gmail.com
Fri Aug 1 11:49:08 EDT 2014


One easy option is to have the speaker wear the kind of
microphone/headphone combination people use for phone calls.
Then you can use any phone's voice-recording function to capture the
audio, and it should do a good job.


On Fri, Aug 1, 2014 at 11:45 AM, Sina Honari <sina.honari at gmail.com> wrote:

> Another mobile or a laptop could be placed next to Yoshua just to
> record the voice. If the camera is far away, the audio won't be very
> good.
>
> On 1 August 2014 11:38, Tae-Ho Kim <ktho894 at gmail.com> wrote:
> > I'll bring my camera as well. I've never tried recording for a long
> > time; I hope it works.
> >
> > On Aug 1, 2014 11:26 AM, "Kyung Hyun Cho" <cho.k.hyun at gmail.com> wrote:
> >>
> >> Tapani will record the talk with his laptop. It's not going to be of
> >> super quality, but hopefully it will be good enough to hear and see
> >> the talk.
> >>
> >>
> >> On Fri, Aug 1, 2014 at 10:06 AM, Kyung Hyun Cho <cho.k.hyun at gmail.com>
> >> wrote:
> >>>
> >>> Unfortunately, I don't have anything to record the talk with. Is there
> >>> anyone else at the lab who happens to have brought a camcorder or
> >>> camera that could record the talk?
> >>>
> >>>
> >>> On Fri, Aug 1, 2014 at 9:51 AM, KyoungGu Woo <epigramwoo at gmail.com>
> >>> wrote:
> >>>>
> >>>> That's a nice idea.
> >>>> I would also benefit a lot from it.
> >>>>
> >>>> Kyoung-Gu
> >>>>
> >>>> On Aug 1, 2014 at 9:37 AM, "Pierre Luc Carrier"
> >>>> <carrier.pierreluc at gmail.com> wrote:
> >>>>
> >>>>> I cannot make it, but I would be very interested in seeing this
> >>>>> tea-talk. If others are in the same situation, perhaps we could look
> >>>>> into recording it like we did with Guillaume's. I do not know whether
> >>>>> we have the equipment to do this in the lab, though.
> >>>>>
> >>>>> Pierre Luc
> >>>>>
> >>>>>
> >>>>> 2014-07-31 17:11 GMT-04:00 Yoshua Bengio <yoshua.bengio at gmail.com>:
> >>>>>>
> >>>>>> Please ignore the attached PDF; it is a very old version. The arXiv
> >>>>>> version is much better, with many mistakes fixed:
> >>>>>>
> >>>>>>    http://arxiv.org/abs/1407.7906
> >>>>>>
> >>>>>> In the future, of course, the svn version will always be the latest
> >>>>>> one (articles/2014/targetprop).
> >>>>>>
> >>>>>> -- Yoshua
> >>>>>>
> >>>>>>
> >>>>>> On Thu, Jul 31, 2014 at 4:49 PM, Kyung Hyun Cho
> >>>>>> <cho.k.hyun at gmail.com> wrote:
> >>>>>>>
> >>>>>>> Dear all,
> >>>>>>>
> >>>>>>> We will have a tea talk this Friday by Prof. Yoshua Bengio. See
> >>>>>>> below for the details and the attached paper.
> >>>>>>>
> >>>>>>> Hope to see many of you there!
> >>>>>>> - Cho
> >>>>>>>
> >>>>>>> ===
> >>>>>>> - Speaker: Prof. Yoshua Bengio (University of Montreal)
> >>>>>>> - Date and Time: 1 Aug 2014 @13.00
> >>>>>>> - Place: AA3195
> >>>>>>> - Title: How Auto-Encoders Could Provide Credit Assignment in Deep
> >>>>>>> Networks via Target Propagation
> >>>>>>> - Abstract:
> >>>>>>> In this paper we propose to exploit reconstruction as a layer-local
> >>>>>>> training signal for deep learning, be it generative or discriminant,
> >>>>>>> single- or multi-modal, supervised, semi-supervised or unsupervised,
> >>>>>>> feedforward or recurrent. Reconstructions can be propagated in a form
> >>>>>>> of target propagation playing a role similar to back-propagation, but
> >>>>>>> helping to reduce the reliance on back-propagation in order to
> >>>>>>> perform credit assignment across many levels of possibly strong
> >>>>>>> non-linearities (which is difficult for back-propagation). A
> >>>>>>> regularized auto-encoder tends to produce a reconstruction that is a
> >>>>>>> more likely version of its input, i.e., a small move in the direction
> >>>>>>> of higher likelihood. By generalizing gradients, target propagation
> >>>>>>> may also make it possible to train deep networks with discrete hidden
> >>>>>>> units. If the auto-encoder takes a representation of both the input
> >>>>>>> and the target (or of any side information) as input, then its
> >>>>>>> reconstruction of the input representation provides a target towards
> >>>>>>> a representation that is more likely, conditioned on all the side
> >>>>>>> information. A deep auto-encoder decoding path generalizes gradient
> >>>>>>> propagation in a learned way that can thus handle not just
> >>>>>>> infinitesimal changes but larger, discrete changes, hopefully
> >>>>>>> allowing credit assignment through a long chain of non-linear
> >>>>>>> operations. For this to work, each layer must be a good denoising or
> >>>>>>> regularized auto-encoder itself. In addition to each layer being a
> >>>>>>> good auto-encoder, the encoder also learns to please the upper layers
> >>>>>>> by transforming the data into a space where it is easier for them to
> >>>>>>> model, flattening manifolds and disentangling factors. The
> >>>>>>> motivations and theoretical justifications for this approach are laid
> >>>>>>> down in this paper, along with conjectures that will have to be
> >>>>>>> verified either mathematically or experimentally.
> >>>>>>>
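
As a rough illustration of the idea in the abstract, here is a minimal
NumPy sketch of layer-local target propagation: each layer is a small
auto-encoder trained locally, and targets are pushed down through the
learned decoders instead of back-propagating gradients end to end.
Everything here (layer sizes, learning rate, the toy top-layer target)
is illustrative only and not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class AELayer:
    """One layer: encoder h = f(x), decoder r = g(h), trained locally."""
    def __init__(self, n_in, n_out, lr=0.1):
        self.W = rng.normal(0.0, 0.1, (n_in, n_out))  # encoder weights
        self.V = rng.normal(0.0, 0.1, (n_out, n_in))  # decoder weights
        self.b = np.zeros(n_out)
        self.c = np.zeros(n_in)
        self.lr = lr

    def f(self, x):   # encoder
        return sigmoid(x @ self.W + self.b)

    def g(self, h):   # decoder; also used to propagate targets downward
        return sigmoid(h @ self.V + self.c)

    def local_update(self, x, h_target):
        # Encoder: move f(x) toward this layer's target with a local
        # delta rule; no gradient flows through any other layer.
        h = self.f(x)
        dh = (h - h_target) * h * (1.0 - h)
        self.W -= self.lr * x.T @ dh
        self.b -= self.lr * dh.sum(axis=0)
        # Decoder: train g(f(x)) to reconstruct x, so that g maps a
        # target toward a more likely configuration of the layer below.
        r = self.g(h)
        dr = (r - x) * r * (1.0 - r)
        self.V -= self.lr * h.T @ dr
        self.c -= self.lr * dr.sum(axis=0)

# Forward pass, then propagate a target downward through the decoders:
# the target for layer i-1 is g_i(target for layer i).
layers = [AELayer(784, 256), AELayer(256, 64)]
x = rng.random((16, 784))                     # toy input batch
hs = [x]
for layer in layers:
    hs.append(layer.f(hs[-1]))

top_target = np.clip(hs[-1] + 0.1, 0.0, 1.0)  # stand-in for a task-driven target
targets = [top_target]
for layer in reversed(layers[1:]):
    targets.append(layer.g(targets[-1]))      # decoder maps the target down
targets.reverse()                             # targets[i] is for layers[i]'s output

for layer, h_in, h_tgt in zip(layers, hs[:-1], targets):
    layer.local_update(h_in, h_tgt)

The point of the sketch is the last loop: each layer only ever sees its
own input and its own target, which is what lets this style of credit
assignment tolerate discrete or strongly non-linear units where
back-propagated gradients are uninformative.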