Hi all,
We have two tea-talks this week!
First, at *13:30 on Thursday, August 24, at Z-209*, we will have a presentation by Yaniv Romano, who is a PhD student at the Technion, Israel.
Second, at *13:45 on Friday, August 25, at AA6214, Georgy Derevyanko*, who is currently a post-doc at Concordia and MILA, will share his research with us.
Please find more details below, and hope to see you in great numbers!
Dima
*Speaker 1:* Yaniv Romano
*Title:* A Quest for a Universal Model for Signals: From Sparsity to ConvNets
*Abstract:* The celebrated sparse representation model assumes that a signal can be represented as a linear combination of a few columns, also called atoms, taken from a matrix termed a dictionary. When dealing with high-dimensional signals, the dictionary learning problem becomes computationally infeasible due to the curse of dimensionality. Traditionally, this problem was circumvented by learning a local sparse model on small overlapping patches extracted from the global signal and processing (e.g., denoising) them independently. We will start this talk by proposing various approaches to bridge the gap between efficient independent local processing and the need to model the global signal at hand. In particular, we will describe novel image restoration algorithms, leading not only to state-of-the-art results but also to a systematic and generic way to boost the performance of many existing algorithms.
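For those less familiar with the model, the standard formulation (our notation, not taken from the talk) is: a signal $y$ is approximated as $y \approx D\alpha$ with a sparse coefficient vector $\alpha$, and the patch-based variant codes each overlapping patch $R_i y$ against a small local dictionary,

$$\hat{\alpha}_i = \arg\min_{\alpha} \|\alpha\|_0 \quad \text{s.t.} \quad \|R_i y - D\alpha\|_2 \le \epsilon,$$

after which the restored signal is obtained by averaging the overlapping reconstructions $D\hat{\alpha}_i$.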
A different approach to treating high-dimensional signals is convolutional sparse coding (CSC). This global model assumes that a signal can be represented as a superposition of a few local atoms, or small filters, shifted to different positions. Recent work proposed a novel theoretical analysis of this global model, based on the observation that, while global, the CSC model can be characterized and analyzed locally. We will extend this local-global relation by showing how one can efficiently solve the pursuit problem and train the filters involved while operating locally on image patches.
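As a point of reference (again in our notation, not the speaker's), the CSC model writes the global signal as

$$x = \sum_{i=1}^{m} d_i * z_i,$$

where each $d_i$ is a small filter, $*$ denotes convolution, and the feature maps $z_i$ are jointly sparse; equivalently, $x = D\Gamma$ for a banded, circulant global dictionary $D$, which is what makes the local analysis mentioned above possible.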
Armed with these new insights, we proceed by proposing a multi-layer extension of this model, ML-CSC, in which signals are assumed to emerge from a cascade of CSC layers. This, in turn, is shown to be tightly connected to convolutional neural networks (CNNs), so much so that the forward pass of a CNN is in fact the thresholding pursuit serving the ML-CSC model. This connection brings a fresh view to CNNs, as we are able to attribute to this architecture theoretical claims such as uniqueness of the representations throughout the network and their stable estimation, all guaranteed under simple local sparsity conditions. Lastly, identifying the weaknesses in the above scheme, we propose an alternative to the forward-pass algorithm, which is both tightly connected to deconvolutional and recurrent neural networks and has better theoretical guarantees.
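To make the CNN connection concrete, here is a minimal sketch (our illustration, not the speaker's code) of the layered thresholding pursuit for ML-CSC; with convolutional dictionaries and a nonnegative (ReLU-like) threshold, each layer becomes exactly a conv + bias + ReLU step of a CNN forward pass:

```python
import numpy as np

def soft_threshold(v, b):
    """Elementwise soft-thresholding operator."""
    return np.sign(v) * np.maximum(np.abs(v) - b, 0.0)

def ml_csc_layered_pursuit(y, dictionaries, thresholds):
    """Layered thresholding pursuit for the ML-CSC model.

    Each layer estimates its sparse code by applying the transpose of its
    dictionary and thresholding the result; with convolutional dictionaries
    and a nonnegative threshold this coincides with the usual CNN forward pass.
    """
    gamma = y
    for D, b in zip(dictionaries, thresholds):
        gamma = soft_threshold(D.T @ gamma, b)
    return gamma

# Toy usage: two layers acting on a random 64-dimensional signal.
rng = np.random.default_rng(0)
y = rng.standard_normal(64)
D1 = rng.standard_normal((64, 128))   # atoms are columns
D2 = rng.standard_normal((128, 256))
codes = ml_csc_layered_pursuit(y, [D1, D2], [0.5, 0.5])
print(codes.shape)                    # (256,)
```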
*Bio:* Yaniv Romano received his B.Sc. degree from the Department of Electrical Engineering, Technion – Israel Institute of Technology, in 2012, where he is currently pursuing his Ph.D. He received the 2015 Zeff fellowship, the 2017 Andrew and Erna Finci Viterbi fellowship, and the 2017 Irwin and Joan Jacobs fellowship. In parallel to his studies, he has been working in industry since 2011 as an image processing algorithm developer. The super-resolution technology he invented as an intern at Google Research was launched in 2017, leading to significant bandwidth savings across billions of images.
*Speaker 2:* Georgy Derevyanko
*Title:* Protein folding project
*Abstract:* Protein folding and structure prediction is a 50-year-old problem. Its solution would immediately transform the drug discovery industry and biology itself. This talk is a call for collaboration to advance the field of protein structure prediction using deep learning techniques. I will give an overview of the state-of-the-art algorithms in the field and their assessment procedures. I will also present some of our publication-ready and preliminary results. Finally, I will present a library of differentiable transformations of protein chemical structure, together with pre-processed datasets, intended to lower the barrier to entry for this problem.
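To give a flavour of what a differentiable transformation of protein structure looks like in practice, here is a hypothetical minimal sketch in PyTorch (ours, not the library Georgy will present): a rotation applied to atom coordinates through which gradients flow back to the rotation angle.

```python
import torch

def rotate_z(coords, theta):
    """Differentiable rotation of atom coordinates about the z-axis."""
    c, s = torch.cos(theta), torch.sin(theta)
    zero, one = torch.zeros_like(c), torch.ones_like(c)
    R = torch.stack([
        torch.stack([c, -s, zero]),
        torch.stack([s,  c, zero]),
        torch.stack([zero, zero, one]),
    ])
    return coords @ R.T

coords = torch.randn(10, 3)                    # 10 atoms in 3D
theta = torch.tensor(0.3, requires_grad=True)  # rotation angle (a parameter)
loss = rotate_z(coords, theta).pow(2).sum()    # any downstream structural loss
loss.backward()                                # gradient w.r.t. the angle
print(theta.grad)
```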
*Bio:*
- 2014: PhD in "Physics for Life Sciences", Université Joseph Fourier (Grenoble, France)
- 2016: Postdoc in experimental structural biology, Forschungszentrum Jülich (Jülich, Germany)
- Present: Postdoc at Concordia CERMM & MILA
Just a kind reminder that Yaniv's talk is today.
Dima
Hi everyone,
We have a talk in 15 minutes at Z-209.
Just a kind reminder that we have another tea-talk today by Georgy Derevyanko.
Dima
We are starting in 3 minutes!