PyCon UK 2019

Plug & train: flexible customisation and extension of python's deep learning frameworks
2019-09-15 , Room J

Python is growing quickly, partly due to its popularity in the data science and machine learning community. Python's flexibility is well suited to new statistical frameworks such as deep learning, which are successful because they are modular. This is the story of Python's success in deep learning.


The rapid growth of Python and the growth of deep learning coincide
almost perfectly. The flexibility of Python has made it the language of choice
for machine learning in general, and deep learning in particular. Tracking the
success of deep learning is a great way to track the success of Python.

Deep learning, or the training of deep neural networks to analyse data and make
predictions, is an inherently flexible approach. Deep neural networks can be
built to accept data of almost arbitrary shapes and sizes. Users achieve this by
combining modules that perform different tasks in a plug-and-play fashion: as
long as the modules are differentiable, they can be combined.
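A minimal sketch of this composition, assuming PyTorch (the layer sizes here
are purely illustrative):

    import torch.nn as nn

    # Two differentiable building blocks chained plug-and-play:
    # an affine layer, a non-linearity, and a classification head.
    model = nn.Sequential(
        nn.Linear(784, 128),   # e.g. flattened 28x28 images in
        nn.ReLU(),
        nn.Linear(128, 10),    # e.g. 10 class scores out
    )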

In Python, frameworks such as TensorFlow, Keras, and PyTorch have made it easy
to construct and customise those building blocks. These frameworks are
inherently recursive: a model is the same type of object as its constituent
parts, and can in turn be used to construct larger models.
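For instance, sketched in PyTorch with illustrative names and sizes, a model
defined as an nn.Module can itself become a building block of a larger
nn.Module:

    import torch.nn as nn

    class Encoder(nn.Module):
        """A small model in its own right."""
        def __init__(self):
            super().__init__()
            self.layers = nn.Sequential(nn.Linear(784, 64), nn.ReLU())

        def forward(self, x):
            return self.layers(x)

    class Classifier(nn.Module):
        """A larger model built from the smaller one: the same type of object."""
        def __init__(self):
            super().__init__()
            self.encoder = Encoder()       # a whole model used as one part
            self.head = nn.Linear(64, 10)

        def forward(self, x):
            return self.head(self.encoder(x))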

This means that these frameworks can take advantage of Python's flexibility.
When subclassing neural networks, users can define hooks that alter either what
the model does with the data (the "forward pass") or how it changes in response
to feedback during training (the "backward pass").
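As a hedged sketch in PyTorch (the scaling factors are arbitrary), a forward
hook can replace a layer's output, and a gradient hook can rescale the feedback
flowing through a parameter:

    import torch
    import torch.nn as nn

    layer = nn.Linear(4, 4)

    # Forward-pass hook: runs every time the layer processes data;
    # returning a value replaces the layer's output.
    def double_output(module, inputs, output):
        return output * 2

    layer.register_forward_hook(double_output)

    # Backward-pass hook: rescale the gradient reaching this parameter.
    layer.weight.register_hook(lambda grad: grad * 0.5)

    out = layer(torch.randn(1, 4))
    out.sum().backward()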

One demonstration of this flexibility is a gradient reversal layer. This allows
you to train a model to do the opposite of what it would usually do: instead of
becoming good at a computer vision task, it becomes bad at it. What sounds like
a fundamental change in behaviour is in fact something we can demonstrate very
easily in the PyTorch framework.
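A common way to sketch such a layer (using torch.autograd.Function; the class
name is ours) is to leave the data untouched on the forward pass and flip the
sign of the gradient on the backward pass:

    import torch

    class GradientReversal(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x):
            # Forward pass: identity, the data passes through unchanged.
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            # Backward pass: negate the gradient, so the layers upstream
            # are pushed in the opposite direction during training.
            return -grad_output

    # Usage: insert it anywhere in a model's forward pass, e.g.
    # features = GradientReversal.apply(features)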

Integrating several of these models in a flexible fashion is similarly easy.
Models in deep learning frameworks are just combinations of smaller models, and
it is easy to take a model that was trained for image classification and use it
as the first step in a larger model that actually locates objects in images.
Thanks to Python's overloading functionality, we'll demonstrate that it is easy
to convert a model that produces one output into a model that produces many
intermediate outputs, as is common with feature pyramids in computer vision.
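One way to sketch this in PyTorch, using torchvision's ResNet-18 as an
illustrative backbone (the wrapper class is ours; the layer names are
torchvision's), is to wrap the classifier and return several intermediate
feature maps instead of the final prediction:

    import torch.nn as nn
    from torchvision.models import resnet18

    class MultiScaleBackbone(nn.Module):
        """Reuse an image classifier, exposing its intermediate feature maps."""
        def __init__(self):
            super().__init__()
            base = resnet18()  # an off-the-shelf classifier as a building block
            self.stem = nn.Sequential(base.conv1, base.bn1, base.relu, base.maxpool)
            self.layer1, self.layer2 = base.layer1, base.layer2
            self.layer3, self.layer4 = base.layer3, base.layer4

        def forward(self, x):
            x = self.stem(x)
            c2 = self.layer1(x)
            c3 = self.layer2(c2)
            c4 = self.layer3(c3)
            c5 = self.layer4(c4)
            # Several outputs at decreasing resolution, ready to feed a
            # detection head or feature pyramid built on top.
            return c2, c3, c4, c5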

This flexibility does not come at the expense of stability in production.
Frameworks like TensorFlow allow for fixed and stable network serialisation
that is used routinely, and with great success, in large corporations with
distributed infrastructure.
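As a hedged sketch with the Keras API in TensorFlow 2 (the path and model are
illustrative), a model can be serialised to a fixed on-disk format and reloaded
unchanged on serving infrastructure:

    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )

    # Serialise architecture, weights, and training configuration to disk...
    model.save("exported_model")

    # ...and load the identical model elsewhere, e.g. in a serving pipeline.
    restored = tf.keras.models.load_model("exported_model")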


Is your proposal suitable for beginners? – yes

I have a background in neuroscience, studying vision in autism. I now work as a research engineer in machine learning. My work focuses on applied AI for human rights monitoring and accountability. I am based in the AI for Good team at Element AI. I also research active learning and work on interactive computing in the Jupyter environment.

Data Scientist in industry and former motor neuroscientist