This week we have Jacob Steinhardt from Stanford giving a special DIRO talk on Friday, February 16, at 10:30 AM in room AA1360.
Don't be fooled: this talk should be the real deal!
Michael
TITLE Provably Secure Machine Learning
KEYWORDS Security, Optimization, AI Safety, Adversarial
ABSTRACT The widespread use of machine learning systems creates a new class of
computer security vulnerabilities where, rather than attacking the
integrity of the software itself, malicious actors exploit the
statistical nature of the learning algorithms. For instance, attackers
can add fake data (e.g., by creating fake user accounts) or
strategically manipulate inputs to the system once it is deployed. So
far, attempts to defend against these attacks have focused on empirical
performance against known sets of attacks. I will argue that this is a
fundamentally inadequate paradigm for achieving meaningful security
guarantees. Instead, we need algorithms that are provably secure by
design, in line with best practices for traditional computer security.

To achieve this goal, we take inspiration from robust statistics and
robust optimization, but with an eye towards the security requirements
of modern machine learning systems. Motivated by the trend towards
models with thousands or millions of features, we investigate the
robustness of learning algorithms in high dimensions. We show that most
algorithms are brittle to even small fractions of adversarial data, and
then develop new algorithms that are provably robust. Additionally, to
accommodate the increasing use of deep learning, we develop an algorithm
for certifiably robust optimization of non-convex models such as neural
networks.
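
For a concrete feel for the brittleness claim in the abstract, here is a
minimal Python sketch (my illustration, not code from the talk): if an
adversary controls an eps fraction of the data and nudges each corrupted
point by a small, innocuous-looking amount along every coordinate, the
empirical mean shifts by roughly eps * 0.5 * sqrt(d), which in high
dimensions dwarfs the sampling error. All parameter values below are
arbitrary choices for the demonstration.

    import numpy as np

    rng = np.random.default_rng(0)
    n, d, eps = 10_000, 1_000, 0.05  # samples, dimension, corruption fraction

    # Clean data: standard Gaussian with true mean at the origin.
    data = rng.standard_normal((n, d))

    # Adversary corrupts an eps fraction, shifting each corrupted point
    # by only 0.5 per coordinate (well within the noise level of 1.0).
    k = int(eps * n)
    data[:k] += 0.5

    shift = np.linalg.norm(data.mean(axis=0))  # distance from true mean 0
    print(f"mean shifted by {shift:.2f}")
    print(f"predicted adversarial shift: {eps * 0.5 * np.sqrt(d):.2f}")
    print(f"typical sampling error:      {np.sqrt(d / n):.2f}")

With these numbers the mean moves by about 0.79 while honest sampling
error is only about 0.32, and the gap grows with d. The sketch only shows
that the naive estimator degrades with dimension; the provably robust
estimators the abstract refers to are designed precisely to avoid this
kind of blow-up.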