We will cover several techniques to expose weaknesses in neural network models for computer vision and to robustify them, from basic precautions to more advanced adversarial training.
Industrial computer vision systems rely on neural networks, including in production. If we are expected to poke our code until it breaks, why should deep learning models get a free pass? We'll look at different ways to poke our models and improve them, from the point of view of a practitioner who has access to the model.
Attacks keep getting more sophisticated, but practitioners can still harden their models with the resources they have: from very basic techniques applicable from day one to full adversarial training.
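As a taste of what a "day one" technique can look like, here is a minimal sketch of a robustness probe that compares clean accuracy with accuracy on noise-perturbed copies of the same images. It assumes a PyTorch classifier; all names (`model`, `loader`, `noise_std`) are illustrative placeholders, not part of the talk itself.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def accuracy_under_perturbation(model, loader, noise_std=0.1):
    """Compare clean accuracy with accuracy on noisy copies of the same images."""
    model.eval()
    clean_correct = noisy_correct = total = 0
    with torch.no_grad():
        for images, labels in loader:
            # Add Gaussian pixel noise and keep values in the valid [0, 1] range.
            noisy = (images + noise_std * torch.randn_like(images)).clamp(0, 1)
            clean_correct += (model(images).argmax(dim=1) == labels).sum().item()
            noisy_correct += (model(noisy).argmax(dim=1) == labels).sum().item()
            total += labels.size(0)
    return clean_correct / total, noisy_correct / total

# Toy usage with random data (illustration only):
toy_model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
toy_loader = DataLoader(TensorDataset(torch.rand(64, 3, 32, 32),
                                      torch.randint(0, 10, (64,))), batch_size=16)
print(accuracy_under_perturbation(toy_model, toy_loader))
```

A large gap between the two numbers is a cheap early warning that the model may be brittle under distribution shift, before any deliberate attack is even considered.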
This is directly relevant to any domain that uses computer vision solutions, from e-commerce to healthcare and autonomous systems.
During the talk we will see:
* ways in which, and reasons why, our (vision) models can fail (accidentally or intentionally),
* how to highlight weaknesses in our (vision) models,
* a range of practical techniques (from basic to more complex) to robustify our vision models through adversarial samples before letting them run in production (see the sketch after this list).
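To make the last point concrete, below is a minimal sketch of what "robustifying through adversarial samples" can look like in practice, using the classic FGSM (Fast Gradient Sign Method) attack in PyTorch. The model, optimizer, data, and `epsilon` values are placeholders chosen for illustration; the talk itself may use different libraries and stronger attacks.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=0.03):
    """Generate adversarial samples by taking one signed-gradient step on the input."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Perturb each pixel in the direction that increases the loss.
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.03):
    """One training step on a 50/50 mix of clean and adversarial samples."""
    adv_images = fgsm_attack(model, images, labels, epsilon)
    optimizer.zero_grad()  # also clears param grads left over from the attack
    loss = 0.5 * (F.cross_entropy(model(images), labels)
                  + F.cross_entropy(model(adv_images), labels))
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with a tiny model and random data (illustration only):
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
images, labels = torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))
print(adversarial_training_step(model, optimizer, images, labels))
```

The idea is simply to keep feeding the model the inputs that currently fool it, so its decision boundary becomes harder to push around with small perturbations.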
Python Skill Level: basic
Abstract as a tweet: How much time & risk do you have? Ways to robustify your vision NN model before you let it go live.
Domains: Artificial Intelligence, Computer Vision, Deep Learning, Machine Learning