PyCon UK 2019

Adversarial Robustness Toolbox: How to attack and defend your machine learning models
2019-09-16, Ferrier Hall

Adversarial samples and poisoning attacks are emerging threats to the security of AI systems. This talk demonstrates how to use the Adversarial Robustness Toolbox (ART), a Python library, to create and deploy robust AI systems.


The Adversarial Robustness Toolbox (ART) is an open source Python library, released under the MIT license, that provides state-of-the-art adversarial attacks and defenses for classifiers built with many popular Python deep learning and machine learning frameworks (TensorFlow, Keras, PyTorch, MXNet and soon scikit-learn). ART enables researchers and developers to run large-scale experiments for benchmarking novel attacks or defenses and to build comprehensive defenses for real-world machine learning applications. ART currently focuses on the adversarial robustness of visual recognition systems, but development is under way to support other data types such as speech, text and time series.
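As a taste of the API, here is a minimal sketch of crafting adversarial examples with ART's Fast Gradient Method against a PyTorch classifier. The module paths follow ART's current 1.x layout and may differ in other versions; the untrained model and random inputs are placeholders for a real model and data.

    import numpy as np
    import torch
    import torch.nn as nn

    from art.estimators.classification import PyTorchClassifier
    from art.attacks.evasion import FastGradientMethod

    # A stand-in model; any trained PyTorch classifier works here.
    model = nn.Sequential(
        nn.Flatten(),
        nn.Linear(28 * 28, 128),
        nn.ReLU(),
        nn.Linear(128, 10),
    )

    # Wrap the model so ART can compute the loss gradients FGSM needs:
    # x_adv = x + eps * sign(grad_x loss(x, y)).
    classifier = PyTorchClassifier(
        model=model,
        loss=nn.CrossEntropyLoss(),
        optimizer=torch.optim.Adam(model.parameters()),
        input_shape=(1, 28, 28),
        nb_classes=10,
        clip_values=(0.0, 1.0),
    )

    x = np.random.rand(8, 1, 28, 28).astype(np.float32)  # placeholder inputs
    attack = FastGradientMethod(estimator=classifier, eps=0.1)
    x_adv = attack.generate(x=x)

    # Predictions on clean vs. perturbed inputs typically diverge.
    print(classifier.predict(x).argmax(axis=1))
    print(classifier.predict(x_adv).argmax(axis=1))

The same attack object works unchanged against ART's Keras, TensorFlow or MXNet wrappers, which is the point of the common classifier interface.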

This talk will provide a short overview of ART's architecture and introduce the library's modules for classifiers, evasion attacks, evasion defenses, detection of evasion attacks and detection of data poisoning. The main part of the talk will walk through several short tutorials showing real applications of ART for adversarial attacks and defenses on machine learning models, supported by code examples and the intuitive mathematical background needed to understand these attacks and defenses.
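In the same spirit, here is a hedged sketch of one defense covered by such tutorials: adversarial training, which hardens a classifier by mixing adversarially perturbed samples into training. It reuses the classifier and attack from the sketch above; the AdversarialTrainer path again follows ART's 1.x layout, and the random training data are placeholders.

    import numpy as np
    from art.defences.trainer import AdversarialTrainer

    # Placeholder training data; substitute your real dataset.
    x_train = np.random.rand(64, 1, 28, 28).astype(np.float32)
    y_train = np.eye(10)[np.random.randint(0, 10, size=64)]  # one-hot labels

    # Train on a mix of clean and FGSM-perturbed samples;
    # ratio controls the adversarial share of each batch.
    trainer = AdversarialTrainer(classifier, attacks=attack, ratio=0.5)
    trainer.fit(x_train, y_train, batch_size=32, nb_epochs=1)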

This talk is useful for anybody interested in machine learning, deep learning, artificial intelligence or security. It aims to deepen the audience's understanding of adversarial machine learning and raise awareness of the importance of security for AI systems. Attending should enable the audience to use ART's interfaces to quickly compose comprehensive defenses for machine learning and AI systems and to apply the attacks needed to test them.

GitHub: https://github.com/IBM/adversarial-robustness-toolbox
Documentation: http://adversarial-robustness-toolbox.readthedocs.io


Is your proposal suitable for beginners?: maybe

I'm a Research Staff Member at IBM Research and my current work focuses on adversarial machine learning and the security of artificial intelligence.