[Lisa_seminaires] Fwd: DIRO Colloquium, Thursday, Feb. 23, 15:30

Yoshua Bengio yoshua.umontreal at gmail.com
Mon Feb 20 21:03:01 EST 2017


Last ML faculty job talk of the season. Please don't miss it.

Also, I would appreciate any feedback on the six speakers we will have heard
(every Monday and Thursday over the last three weeks, including this one).
One of these could become your collaborator or co-supervisor ;-)

---------- Forwarded message ----------
From: Pierre Lecuyer <lecuyer at iro.umontreal.ca>
Date: 2017-02-20 17:48 GMT-05:00
Subject: DIRO Colloquium, Thursday, Feb. 23, 15:30
To: seminaires <seminaires at iro.umontreal.ca>
Cc: gerad <gerad at crt.umontreal.ca>



*Principled Tuning for Large-Scale ML Systems*

by

*Ioannis Mitliagkas*

Departments of Statistics and Computer Science
Stanford University

*Thursday, February 23, 15:30-16:30*, *Room 6214*

Pavillon André-Aisenstadt, Université de Montréal, 2920 Chemin de la Tour


Coffee from 15:00 to 15:30

*This talk will be given in English.*

*Abstract:*

Modern machine learning systems rely on complex and distributed pipelines
that require extensive hyperparameter tuning to achieve the desired
performance. Careful tuning can result in significant speedups and
improvements in solution quality. However, the dimensionality of the
hyperparameter space often makes the use of brute-force search prohibitive.
To make things worse, components can interact in unexpected ways and make
joint tuning necessary. These challenges preclude non-experts from fully
utilizing the potential of modern machine learning tools and call for a
deeper understanding of the effect hyperparameters have on the quality and
performance of a system.

In this talk, I will discuss examples of tuning large-scale learning and
inference systems. I will focus on recent work that reveals a previously
unknown interaction between system and algorithm dynamics when running an
asynchronous learning system. Asynchronous methods are widely used for
their superior throughput, but have limited theoretical justification when
applied to non-convex problems. I will show that running stochastic
gradient descent (SGD) in an asynchronous manner can be viewed as adding a
momentum-like term to the SGD iteration. This result does not assume
convexity of the objective function, so it applies to deep learning
systems. Furthermore, using a hybrid parallel architecture, we can control
the level of asynchrony, which becomes a new hyperparameter. Theory then
implies that jointly tuning momentum and the level of asynchrony can
significantly reduce the number of iterations necessary for an asynchronous
system to achieve the same solution. This line of work provides a number of necessary
components for realizing the vision of an automated machine learning
pipeline.
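
To make the momentum analogy concrete, here is a minimal, purely
illustrative sketch (not taken from the talk). It compares classical
heavy-ball momentum SGD with a toy simulation of asynchronous SGD in which
each applied gradient was computed on slightly stale parameters; the test
objective, the staleness model, and all names in the code are assumptions
made for illustration only.

import numpy as np

rng = np.random.default_rng(0)

def grad(x):
    # Gradient of a simple non-convex test objective f(x) = x**4/4 - x**2/2.
    return x**3 - x

def momentum_sgd(x0, lr=0.01, mu=0.5, steps=500, noise=0.1):
    # Classical heavy-ball momentum: v <- mu*v - lr*g ;  x <- x + v
    x, v = x0, 0.0
    for _ in range(steps):
        g = grad(x) + noise * rng.standard_normal()
        v = mu * v - lr * g
        x = x + v
    return x

def async_sgd(x0, lr=0.01, workers=4, steps=500, noise=0.1):
    # Toy model of asynchrony: each applied gradient was computed on a
    # parameter value that is a few updates old (staleness grows with the
    # number of workers), so past gradients keep influencing the current
    # iterate, much like a momentum term would.
    history = [x0]
    x = x0
    for _ in range(steps):
        staleness = int(rng.integers(0, workers))   # crude staleness model
        x_stale = history[max(0, len(history) - 1 - staleness)]
        g = grad(x_stale) + noise * rng.standard_normal()
        x = x - lr * g
        history.append(x)
    return x

print("momentum SGD final iterate:", momentum_sgd(2.0))
print("async SGD    final iterate:", async_sgd(2.0))

The comparison is only meant to show that staleness injects a dependence on
past gradients; the talk's theoretical result makes that correspondence
precise.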

*Short Bio:*

Ioannis Mitliagkas is a Postdoctoral Scholar with the departments of
Statistics and Computer Science at Stanford University. He obtained his
Ph.D. from the department of Electrical and Computer Engineering at The
University of Texas at Austin. His research focuses on understanding and
optimizing the scan order for Gibbs sampling, as well as understanding the
interaction between optimization and the dynamics of large-scale learning
systems. In the past he has worked on high-dimensional streaming problems
and fast algorithms and computation for large graph problems.

-- 
Pierre L'Ecuyer, Full Professor
Canada Research Chair in Stochastic Simulation and Optimization
CIRRELT, GERAD, and DIRO, Université de Montréal, Canada
http://www.iro.umontreal.ca/~lecuyer

