JuliaCon 2022 (Times are UTC)

Bender.jl: A utility package for customizable deep learning
2022-07-28, Blue

A wide range of research on feedforward neural networks requires "bending" the chain rule during backpropagation. Bender.jl provides neural network layers, compatible with Flux.jl, that give users the freedom to choose every aspect of the forward mapping. This makes it easy to leverage ChainRules.jl to implement a wide range of experiments, such as training binary neural networks, Feedback Alignment, and Direct Feedback Alignment, in just a few lines of code.
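To illustrate the kind of "bending" involved, here is a minimal sketch of a binary activation with a straight-through estimator, written directly against ChainRulesCore.jl rather than Bender.jl's actual API; the function name binary_sign and the hard-tanh-style gradient clipping are assumptions made for this example.

```julia
using ChainRulesCore

# Binary activation: the forward pass returns sign(x) elementwise, while the
# backward pass uses a straight-through estimator so that non-zero error
# signals can pass through the otherwise zero-gradient sign function.
binary_sign(x::AbstractArray) = sign.(x)

function ChainRulesCore.rrule(::typeof(binary_sign), x::AbstractArray)
    y = sign.(x)
    function binary_sign_pullback(ȳ)
        # Let the incoming gradient through where |x| <= 1 and block it
        # elsewhere (hard-tanh straight-through estimator).
        x̄ = unthunk(ȳ) .* (abs.(x) .<= 1)
        return NoTangent(), x̄
    end
    return y, binary_sign_pullback
end
```

With such a rule in place, Zygote (the AD used by Flux) picks up the custom pullback wherever binary_sign appears in a model's forward pass, so the surrounding layers can be trained as usual.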


In this lightning talk we will explore two different use cases of Bender.jl: training binary neural networks and training neural networks with the biologically motivated Feedback Alignment algorithm. Binary neural networks and Feedback Alignment may seem like very different areas of research, but from an implementation point of view they are very similar, as both amount to modifying the chain rule during backpropagation. Implementing a binary neural network requires modifying backpropagation so that non-zero error signals can propagate through binary activation functions. Feedback Alignment requires modifying backpropagation to use a set of auxiliary weights for transporting errors backwards, avoiding the biologically implausible weight-symmetry requirement inherent to backpropagation; a sketch of such a feedback pathway is given below. By allowing the user to specify the exact nature of the forward mapping when initializing a layer, it becomes possible to leverage ChainRules.jl to implement these and similar experiments with little effort.
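In the same spirit, the following is a minimal sketch of a Feedback Alignment style dense layer, again written against ChainRulesCore.jl and Flux.jl directly rather than reproducing Bender.jl's API; the layer name FALinear and the helper fa_dense are hypothetical names introduced for this example.

```julia
using Flux, ChainRulesCore

# A dense layer whose forward pass is the usual W*x .+ b, but whose backward
# pass transports errors through a fixed random matrix B instead of W',
# as in Feedback Alignment.
struct FALinear{M<:AbstractMatrix,V<:AbstractVector}
    W::M   # trainable forward weights
    B::M   # fixed feedback weights, never updated
    b::V   # trainable bias
end

FALinear(in::Integer, out::Integer) = FALinear(
    Flux.glorot_uniform(out, in),
    Flux.glorot_uniform(out, in),
    zeros(Float32, out),
)

Flux.@functor FALinear (W, b)   # only W and b are treated as parameters

(l::FALinear)(x) = fa_dense(l.W, l.B, l.b, x)

fa_dense(W, B, b, x) = W * x .+ b

function ChainRulesCore.rrule(::typeof(fa_dense), W, B, b, x)
    y = W * x .+ b
    function fa_dense_pullback(ȳ)
        δ = unthunk(ȳ)
        W̄ = δ * x'                # weight gradient as in ordinary backprop
        b̄ = vec(sum(δ; dims = 2))
        x̄ = B' * δ                # error sent backwards through B, not W'
        return NoTangent(), W̄, NoTangent(), b̄, x̄
    end
    return y, fa_dense_pullback
end
```

A model built from such layers can be trained with an ordinary Flux training loop: the gradients for W and b look like standard backpropagation, while the error signal reaching earlier layers has travelled through the fixed B matrices.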

I am a PhD student at Chalmers University of Technology. My research interests are biologically motivated learning algorithms and energy-based models.