Our goal is to create a simple yet interactive showcase for computer vision using a Python notebook. In a trade fair setup, we want to learn new object classes quickly using very few training examples. Thus, we rely on pretrained neural networks for feature extraction (transfer learning).
If you want to showcase novel technologies, it is best to have interactive demonstrations that let people explore their characteristics in a playful environment. In this session we will build such a demonstrator for machine-learning-based computer vision in a Jupyter notebook. The goal is to learn new image classes quickly and with minimal training examples, so that the technology can be demonstrated in situations like trade fairs or conference exhibitions. This will not achieve production-ready results, but it is a compact and viable example.
The obstacles that we will face and overcome are listed here; a short code sketch for each follows the list:
- How to get the image data from the webcam through the browser to the Python kernel? (Spoiler: ipywebrtc)
- How to extract meaningful image features using pretrained networks? (Spoiler: keras)
- How to glue everything together to have a live camera view with classification? (Spoiler: ipywidgets and callbacks)
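A minimal sketch of the first step, assuming ipywebrtc is installed next to Jupyter: a CameraStream widget requests webcam access in the browser via WebRTC, and an ImageRecorder hands captured frames to the kernel as PNG bytes. The resolution constraints and the `current_frame` helper are our own choices for illustration.

```python
from ipywebrtc import CameraStream, ImageRecorder
import PIL.Image
import io

# The CameraStream widget asks the browser for webcam access via WebRTC.
camera = CameraStream(constraints={'audio': False,
                                   'video': {'width': 320, 'height': 240}})

# The ImageRecorder captures single frames from that stream as PNG bytes,
# which arrive in the kernel as recorder.image.value.
recorder = ImageRecorder(stream=camera, format='png')

def current_frame():
    """Decode the most recently captured frame into a PIL image."""
    return PIL.Image.open(io.BytesIO(recorder.image.value)).convert('RGB')

recorder  # displaying the widget shows the stream and a snapshot button
```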
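For the second step, a hedged sketch of feature extraction with a pretrained Keras network. The choice of MobileNetV2, the 224x224 input size, and the `extract_features` helper are assumptions made for this example; any ImageNet-pretrained architecture together with its matching `preprocess_input` would work the same way.

```python
import numpy as np
from tensorflow.keras.applications.mobilenet_v2 import MobileNetV2, preprocess_input

# ImageNet-pretrained network without its classification head; global average
# pooling condenses the last feature maps into one fixed-length vector per image.
feature_extractor = MobileNetV2(weights='imagenet', include_top=False,
                                pooling='avg', input_shape=(224, 224, 3))

def extract_features(pil_image):
    """Turn a PIL image into a feature vector suitable for a simple classifier."""
    x = np.asarray(pil_image.resize((224, 224)), dtype='float32')
    x = preprocess_input(x[np.newaxis, ...])   # scale pixels to the range the net expects
    return feature_extractor.predict(x)[0]     # fixed-length feature vector
```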
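Finally, a sketch of the glue code, assuming the `camera`, `recorder`, `current_frame`, and `extract_features` objects from the previous sketches exist. The nearest-neighbour classifier from scikit-learn and all widget names are illustrative choices; the callback retriggers the recorder so that new frames keep arriving.

```python
import ipywidgets as widgets
from sklearn.neighbors import KNeighborsClassifier

examples, labels = [], []                         # few-shot training set kept in memory
classifier = KNeighborsClassifier(n_neighbors=1)

class_name = widgets.Text(description='Class:')
learn_button = widgets.Button(description='Learn example')
prediction = widgets.Label(value='no examples learned yet')

def on_learn(_):
    """Store the current frame's feature vector under the entered class name."""
    examples.append(extract_features(current_frame()))
    labels.append(class_name.value)
    classifier.fit(examples, labels)

def on_new_frame(change):
    """Runs whenever the browser delivers a freshly captured frame to the kernel."""
    if labels:
        prediction.value = str(classifier.predict([extract_features(current_frame())])[0])
    recorder.recording = True                     # request the next frame right away

learn_button.on_click(on_learn)
recorder.image.observe(on_new_frame, names='value')

# camera shows the live video, recorder adds a manual snapshot button
widgets.VBox([camera, recorder, class_name, learn_button, prediction])
```

Capturing still has to be started once, for instance by clicking the recorder's snapshot button or setting `recorder.recording = True` in a later cell; from then on the observe callback keeps the live classification loop going.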
What we will not be covering:
- Performance, stability and scalability
- GPUs for neural networks
#transferlearning, #keras, #WebRTC, #python
Build an ML showcase using #transferlearning, #keras, #WebRTC, #python
Dr. Harald Bosch is a senior consultant at Novatec Consulting GmbH, where he coordinates the topics of machine learning and artificial intelligence. Prior to this, he was a postdoctoral researcher at the University of Stuttgart, where he developed visual analytics solutions for complex domain problems, bringing together information visualization, human-computer interaction, and data mining.