Reminder: this is happening today!

Please put your name in the spreadsheet if you want to speak with Andreas after the talk.

Dima

On Fri, 17 Mar 2017 at 15:11 Dzmitry Bahdanau <dimabgv@gmail.com> wrote:
Hi all,

Our next tea-talk will be given by Dr. Andreas Moshovos, a professor at the University of Toronto, on March 24 at 13:45 (new time!) in room AA6214. Hope to see many of you there! Please let me know if you would like to speak to Andreas after the talk. You can put your name directly in the spreadsheet below:


Title: Exploiting Value Content to Accelerate Inference with Convolutional Neural Networks

Abstract: Sufficiently capable computing hardware is essential for practical applications of Deep Learning. Until very recently, computing hardware capabilities had been increasing at an exponential rate. As a result, around 2010 computing hardware capabilities reached the level necessary to demonstrate Deep Learning's true potential. Unfortunately, semiconductor technology scaling, the key enabler of this past exponential growth in capabilities, has slowed down dramatically. Fortunately, specialized computing hardware design has the potential to deliver another 2 to 3 orders of magnitude of improvement in computing capabilities.

Our goal is to develop the techniques necessary for boosting computing hardware capabilities, thus enabling further innovation in Deep Learning. We are developing specialized computing hardware for Deep Learning networks whose key feature is that it is value-based. These value-based accelerators take advantage of expected properties of the value stream computed at runtime by Deep Learning networks, such as the value distribution of the activations, or even their bit content. Using image classification convolutional neural networks, we have demonstrated 2 to 3 orders of magnitude execution time improvements over conventional graphics processor hardware and up to 4.5x improvements over a state-of-the-art accelerator. In this talk we will review the need for specialized computing hardware for Deep Learning and summarize our efforts. We will also briefly touch upon the recently approved NSERC COHESA Strategic Partnership Network on Hardware Acceleration for Machine Learning. NSERC COHESA brings together 19 researchers across multiple Canadian universities and 8 industrial partners.
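(For intuition on "value-based": here is a minimal sketch of my own, assuming NumPy and toy data; it is not the speaker's implementation. It estimates, for one layer's post-ReLU activations, how much work could in principle be skipped by exploiting zero values and effective bit widths.)

import numpy as np

# Illustrative only: fake post-ReLU activations, which in CNNs contain many zeros.
rng = np.random.default_rng(0)
acts = np.maximum(rng.normal(size=(64, 32, 32)), 0.0)

# Value distribution: zero activations contribute nothing to the convolution sums,
# so the multiply-accumulates that involve them could be skipped.
zero_fraction = np.mean(acts == 0.0)

# Bit content: in a toy 8.8 fixed-point encoding, most values need far fewer than
# 16 bits, which a bit-serial engine could exploit.
fixed = np.round(acts * 256).astype(np.int64)
eff_bits = np.where(fixed > 0, np.floor(np.log2(np.maximum(fixed, 1))) + 1, 0)

print(f"zero activations: {zero_fraction:.1%} of MACs could be skipped")
print(f"average effective bit width: {eff_bits.mean():.1f} of 16 bits")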


Bio: Andreas Moshovos teaches how to design and optimize computing hardware engines at the University of Toronto, where he has the privilege of collaborating with several talented students on techniques to improve execution time, energy efficiency and cost for computing hardware. He has also taught at Northwestern University, USA, the University of Athens, Greece, the Hellenic Open University, Greece, and as an invited professor at the École Polytechnique Fédérale de Lausanne, Switzerland. He received the ACM SIGARCH Maurice Wilkes Award in 2010, an NSF CAREER Award in 2000, two IBM Faculty Awards, a Semiconductor Research Corporation Inventor Recognition Award, and a MICRO Hall of Fame award. He has served as the Program Chair for the ACM/IEEE International Symposium on Microarchitecture and the IEEE International Symposium on the Performance Analysis of Systems and Software. He studied computer science at the University of Crete, Greece, at New York University, USA, and at the University of Wisconsin-Madison, USA.

Best,
Dima