This week we have *Jacob Steinhardt* from Stanford giving a special DIRO talk on *Fri Feb 16* at *10:30AM* in room *AA1360*.
Don't be fooled, this talk should be the real deal!

Michael
*TITLE* Provably Secure Machine Learning
*KEYWORDS* Security, Optimization, AI Safety, Adversarial
*ABSTRACT* The widespread use of machine learning systems creates a new class of computer security vulnerabilities where, rather than attacking the integrity of the software itself, malicious actors exploit the statistical nature of the learning algorithms. For instance, attackers can add fake data (e.g. by creating fake user accounts), or strategically manipulate inputs to the system once it is deployed. So far, attempts to defend against these attacks have focused on empirical performance against known sets of attacks. I will argue that this is a fundamentally inadequate paradigm for achieving meaningful security guarantees. Instead, we need algorithms that are provably secure by design, in line with best practices for traditional computer security. To achieve this goal, we take inspiration from robust statistics and robust optimization, but with an eye towards the security requirements of modern machine learning systems. Motivated by the trend towards models with thousands or millions of features, we investigate the robustness of learning algorithms in high dimensions. We show that most algorithms are brittle to even small fractions of adversarial data, and then develop new algorithms that are provably robust. Additionally, to accommodate the increasing use of deep learning, we develop an algorithm for certifiably robust optimization of non-convex models such as neural networks.
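To make the brittleness claim in the abstract concrete, here is a minimal sketch (illustrative only, not the algorithms from the talk): in high dimensions, even a small fraction of planted points can drag the empirical mean far from the truth, while a simple robust estimator such as the coordinate-wise median barely moves. All parameters below are made up for the demonstration.

```python
# Minimal sketch (illustrative, not the talk's algorithms): an epsilon
# fraction of adversarial points moves the empirical mean far from the
# true mean, while the coordinate-wise median stays close.
import numpy as np

rng = np.random.default_rng(0)
n, d, eps = 1000, 100, 0.05        # samples, dimension, corrupted fraction

clean = rng.normal(size=(n, d))    # inliers from N(0, I); true mean is 0
poisoned = clean.copy()
poisoned[: int(eps * n)] = 100.0   # attacker plants 5% far-away fake points

for name, data in [("clean", clean), ("poisoned", poisoned)]:
    mean_err = np.linalg.norm(data.mean(axis=0))        # distance from true mean
    med_err = np.linalg.norm(np.median(data, axis=0))
    print(f"{name:8s} mean error = {mean_err:6.2f}  median error = {med_err:6.2f}")
```

The catch, and part of why the abstract stresses high dimensions, is that simple coordinate-wise defenses like the median can still incur error that grows with the dimension against cleverer attacks; the provably robust algorithms mentioned above aim for guarantees that hold even then.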
*BIO* Jacob Steinhardt is a graduate student in artificial intelligence at Stanford University working with Percy Liang. His main research interest is in designing machine learning algorithms with the reliability properties of good software. So far this has led to the study of provably secure machine learning systems, as well as the design of learning algorithms that can detect their own failures and generalize predictably in new situations. Outside of research, Jacob is a technical advisor to the Open Philanthropy Project, and mentors gifted high school students through the USACO and SPARC summer programs.
Reminder, this is in 10 minutes!