Beat Buesser
I'm a Research Staff Member at IBM Research, and my current work focuses on adversarial machine learning and the security of artificial intelligence.
Session
09-16
15:30
30min
Adversarial Robustness Toolbox: How to attack and defend your machine learning models
Beat Buesser
Adversarial samples and poisoning attacks are emerging threats to the security of AI systems. This talk demonstrates how to apply the Python library Adversarial Robustness Toolbox (ART) to create and deploy robust AI systems (a minimal usage sketch follows this listing).
Ferrier Hall
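
For a flavour of how ART is used in practice, here is a minimal sketch, not taken from the talk materials: it wraps a plain scikit-learn classifier as an ART estimator and crafts adversarial samples with the Fast Gradient Method, one of ART's evasion attacks. It assumes ART is installed (pip install adversarial-robustness-toolbox); the Iris data set, the LogisticRegression model, and the eps=0.5 perturbation budget are illustrative choices only.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import SklearnClassifier

# Train an ordinary scikit-learn classifier on the Iris data set.
x, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)
model.fit(x, y)

# Wrap the fitted model as an ART estimator so attacks can query it.
classifier = SklearnClassifier(model=model, clip_values=(0.0, 8.0))

# Craft adversarial samples with the Fast Gradient Method (evasion attack).
attack = FastGradientMethod(estimator=classifier, eps=0.5)
x_adv = attack.generate(x=x)

# Compare accuracy on clean inputs versus the adversarial samples.
clean_acc = np.mean(np.argmax(classifier.predict(x), axis=1) == y)
adv_acc = np.mean(np.argmax(classifier.predict(x_adv), axis=1) == y)
print(f"Accuracy on clean samples:       {clean_acc:.2f}")
print(f"Accuracy on adversarial samples: {adv_acc:.2f}")

The same wrapped estimator can be passed to ART's defences (for example, adversarial training or input preprocessing) to harden the model against such attacks, which is the "defend" half of the talk title.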