Great! I will be at the lecture hall 15 minutes prior to the tea talk. If anyone brings the recording equipment, please stop by and test the setup with me!
On Fri, Aug 1, 2014 at 11:51 AM, Bing Xu antinucleon@gmail.com wrote:
I will bring my camera. It can record 1.5 hours in 1080i.
On Aug 1, 2014 11:49 AM, "Guillaume Alain" guillaume.alain.umontreal@gmail.com wrote:
One easy option is to have the speaker wear the kind of microphone/headphones combination that people use to talk on their phone. Then you can use any phone's voice recording function to record the audio and it should do a good job.
On Fri, Aug 1, 2014 at 11:45 AM, Sina Honari sina.honari@gmail.com wrote:
Another mobile phone or a laptop could be placed next to Yoshua just to record the voice. If the camera is far away, the audio won't be that good.
On 1 August 2014 11:38, Tae-Ho Kim ktho894@gmail.com wrote:
I'll bring my camera as well. I've never tried recording for such a long time; I hope it works.
On Aug 1, 2014 11:26 AM, "Kyung Hyun Cho" cho.k.hyun@gmail.com wrote:
Tapani will record the talk with his laptop. It's not going to be of super quality, but hopefully it will be good enough to hear and see the talk.
On Fri, Aug 1, 2014 at 10:06 AM, Kyung Hyun Cho cho.k.hyun@gmail.com wrote:
Unfortunately, I don't have anything to record the talk with. Is there anyone else at the lab who happens to have a camcorder or camera that can record the talk?
On Fri, Aug 1, 2014 at 9:51 AM, KyoungGu Woo epigramwoo@gmail.com wrote:

That's a nice idea. I would also benefit a lot from it.

Kyoung-Gu

On Aug 1, 2014 at 9:37 AM, "Pierre Luc Carrier" carrier.pierreluc@gmail.com wrote:

I cannot make it, but I would be very interested in seeing this tea talk. If others are in the same situation, perhaps we could look into recording this tea talk like we did with Guillaume's. I do not know if we have the equipment to do this in the lab, though.

Pierre Luc

2014-07-31 17:11 GMT-04:00 Yoshua Bengio yoshua.bengio@gmail.com:

Please ignore the attached pdf; it is a very old version. The arXiv version is much better, with many mistakes fixed:

http://arxiv.org/abs/1407.7906

In the future, of course, the svn version will always be the latest one (articles/2014/targetprop).

-- Yoshua

On Thu, Jul 31, 2014 at 4:49 PM, Kyung Hyun Cho cho.k.hyun@gmail.com wrote:

Dear all,

We will have a tea talk this Friday by Prof. Yoshua Bengio. See below for the details and the attached paper.

Hope to see many of you there!
- Cho

===
- Speaker: Prof. Yoshua Bengio (University of Montreal)
- Date and Time: 1 Aug 2014 @13.00
- Place: AA3195
- Title: How Auto-Encoders Could Provide Credit Assignment in Deep Networks via Target Propagation
- Abstract: In this paper we propose to exploit reconstruction as a layer-local training signal for deep learning, be it generative or discriminant, single- or multi-modal, supervised, semi-supervised or unsupervised, feedforward or recurrent. Reconstructions can be propagated in a form of target propagation, playing a role similar to back-propagation but helping to reduce the reliance on back-propagation in order to perform credit assignment across many levels of possibly strong non-linearities (which is difficult for back-propagation). A regularized auto-encoder tends to produce a reconstruction that is a more likely version of its input, i.e., a small move in the direction of higher likelihood. By generalizing gradients, target propagation may also make it possible to train deep networks with discrete hidden units. If the auto-encoder takes both a representation of the input and of the target (or of any side information) as input, then its reconstruction of the input representation provides a target towards a representation that is more likely, conditioned on all the side information. A deep auto-encoder decoding path generalizes gradient propagation in a learned way that can thus handle not just infinitesimal changes but larger, discrete changes, hopefully allowing credit assignment through a long chain of non-linear operations. For this to work, each layer must be a good denoising or regularized auto-encoder itself. In addition to each layer being a good auto-encoder, the encoder also learns to please the upper layers by transforming the data into a space where it is easier for them to model, flattening manifolds and disentangling factors. The motivations and theoretical justifications for this approach are laid down in this paper, along with conjectures that will have to be verified either mathematically or experimentally.
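To make the abstract's mechanism a bit more concrete, here is a minimal NumPy sketch of the core idea of target propagation: targets flow down through learned decoders instead of back-propagated gradients, and each layer is updated locally. Everything in it (the class and function names, the simple delta-rule update) is an illustrative assumption rather than the paper's implementation, and the training of the decoders themselves as denoising auto-encoders is omitted.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class AELayer:
    """One stacked layer: an encoder f (upward) and a decoder g (downward)."""
    def __init__(self, n_in, n_out, rng):
        self.W = rng.normal(0.0, 0.1, (n_in, n_out))  # encoder weights
        self.V = rng.normal(0.0, 0.1, (n_out, n_in))  # decoder weights

    def f(self, h):   # encode: activation below -> activation above
        return sigmoid(h @ self.W)

    def g(self, h):   # decode: a target above -> a target below
        return sigmoid(h @ self.V)

def target_prop_step(layers, x, top_target, lr=0.5):
    """One illustrative update: propagate targets down through the decoders
    and move each encoder's output toward its target with a local rule."""
    # Forward pass: hs[i] is the input to layers[i]; hs[-1] is the top code.
    hs = [x]
    for layer in layers:
        hs.append(layer.f(hs[-1]))

    # Downward pass: each decoder turns the target for its layer's output
    # into a target for its layer's input -- no gradient chain involved.
    targets = [None] * (len(layers) + 1)
    targets[-1] = top_target
    for i in range(len(layers) - 1, -1, -1):
        targets[i] = layers[i].g(targets[i + 1])

    # Local updates: a delta rule per layer, using only that layer's own
    # input, output, and target (decoder training omitted for brevity).
    for i, layer in enumerate(layers):
        h_in, h_out, t = hs[i], hs[i + 1], targets[i + 1]
        err = (t - h_out) * h_out * (1.0 - h_out)  # sigmoid derivative
        layer.W += lr * h_in.T @ err

# Toy usage: a 2-layer stack; in practice the "better" top code would come
# from nudging the top representation toward lower task loss.
rng = np.random.default_rng(0)
layers = [AELayer(8, 6, rng), AELayer(6, 4, rng)]
x = rng.random((5, 8))
top_target = rng.random((5, 4))
target_prop_step(layers, x, top_target)

The point of the sketch is that layers[i].g(targets[i + 1]) replaces the gradient chain: a learned decoder, not differentiation, carries the training signal downward, which is why discrete hidden units are not an obstacle.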