Alternatives to Backpropagation in Neural Networks


Hi, and welcome back to Brains@Bay!
This time we will be discussing alternatives to backpropagation in neural networks.
From the neuroscience side, we will host Prof. Rafal Bogacz from the University of Oxford, who will discuss the viability of backpropagation in the brain and the relationship between predictive coding networks and backpropagation. Prof. Bogacz has published extensively in the field and co-authored a comprehensive review paper on Theories of Error Backpropagation in the Brain.
Sindy Löwe from the University of Amsterdam will discuss her latest research on self-supervised representation learning. She is the first author of the paper Putting An End to End-to-End: Gradient-Isolated Learning of Representations, presented at last year's NeurIPS, which shows that networks can learn by optimizing the mutual information between representations at each layer of a model in isolation.
Jack Kendall, co-founder of RAIN Neuromorphics, will show how equilibrium propagation can be used to train end-to-end analog networks, which can guide the development of a new generation of ultra-fast, compact and low-power neural networks supporting on-chip learning.
The talks will be followed by a discussion panel where we will juxtapose the different points of view and address your main questions. We look forward to seeing you there!
----
The link can be found under the Online Event tab and will connect directly to the webinar at the scheduled date and time (Nov 18, 9:30 a.m. Pacific Time). If you are not using the Zoom app or are not signed in at the time, it will only ask for your name and email to connect.
----
Detailed descriptions of the talks:
Rafal Bogacz
Predictive coding: an alternative to backpropagation for biological neural networks?
Predictive coding is an established model of information processing in cortical circuits, originally developed to describe unsupervised learning in the visual cortex (Rao & Ballard, 1999). More recently, it has been demonstrated that when these networks are used for supervised learning, they approximate backpropagation. Remarkably, efficient learning in these networks can be achieved using only local Hebbian plasticity. This talk will give an overview of predictive coding networks, explain how they approximate backpropagation, and highlight how they differ from artificial neural networks.
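To make the idea of learning with only local errors concrete, here is a minimal numerical sketch of supervised learning in a tiny predictive coding network, roughly in the spirit of the formulations reviewed in the literature mentioned above. The layer sizes, learning rates and number of inference steps are illustrative assumptions, not details from the talk: hidden activity relaxes to reduce local prediction errors, and each weight update is a Hebbian product of local error and presynaptic activity.

```python
import numpy as np

# Illustrative sketch only: a 3-layer predictive coding network with clamped
# input and output. All sizes and step counts below are assumed for the demo.
rng = np.random.default_rng(0)
sizes = [4, 8, 2]                                   # input, hidden, output (assumed)
W = [rng.normal(0.0, 0.1, (sizes[l + 1], sizes[l])) for l in range(len(sizes) - 1)]

f = np.tanh                                         # activation function
df = lambda a: 1.0 - np.tanh(a) ** 2                # its derivative

def train_step(x_in, y_target, infer_steps=50, infer_lr=0.1, lr=0.01):
    # Value nodes: input and output layers are clamped, the hidden layer relaxes.
    x = [x_in, W[0] @ f(x_in), y_target]
    for _ in range(infer_steps):
        # Prediction errors are purely local to each layer.
        eps = [x[l + 1] - W[l] @ f(x[l]) for l in range(2)]
        # Relax the hidden value nodes to reduce the local errors.
        x[1] = x[1] + infer_lr * (-eps[0] + df(x[1]) * (W[1].T @ eps[1]))
    # Hebbian weight update: outer product of local error and presynaptic activity.
    eps = [x[l + 1] - W[l] @ f(x[l]) for l in range(2)]
    for l in range(2):
        W[l] += lr * np.outer(eps[l], f(x[l]))

# Example: one training step on a random input/target pair.
train_step(rng.normal(size=4), np.array([1.0, -1.0]))
```

Note that no error signal is transported backwards through the network by a separate algorithm; every quantity used to change a weight is available at that layer, which is what makes the scheme a candidate model for learning in cortical circuits.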
Sindy Löwe
Putting An End to End-to-End
Modern deep learning models are typically optimized using end-to-end backpropagation and a global, supervised loss function. Although empirically highly successful, this approach is considered biologically implausible: the brain does not optimize a global objective by backpropagating error signals. Instead, it is highly modular and learns predominantly from local information. In this talk, I will present Greedy InfoMax, a local self-supervised representation learning approach inspired by this local learning in the brain. I will demonstrate how Greedy InfoMax enables us to train a neural network without labels and without end-to-end backpropagation, while achieving highly competitive results on downstream classification tasks. Last but not least, I will outline how this local learning allows us to asynchronously optimize individual subparts of a neural network and distribute this optimization across devices.
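For a flavour of what gradient-isolated, module-wise training looks like in code, here is a minimal PyTorch sketch. The module sizes, the toy contrastive loss and the use of generic paired "views" are illustrative assumptions for the demo; the actual Greedy InfoMax setup described in the paper applies an InfoNCE objective to temporally or spatially nearby patches of audio and images.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative sketch only: each module is trained with its own local loss and
# optimizer, and inputs are detached so no gradient flows between modules.
torch.manual_seed(0)

modules = nn.ModuleList([
    nn.Sequential(nn.Linear(16, 32), nn.ReLU()),    # toy module sizes (assumed)
    nn.Sequential(nn.Linear(32, 32), nn.ReLU()),
])
optims = [torch.optim.Adam(m.parameters(), lr=1e-3) for m in modules]

def contrastive_loss(anchor, positive):
    # InfoNCE-style loss: each anchor should match its own positive within the batch.
    logits = anchor @ positive.t()                  # (batch, batch) similarity scores
    labels = torch.arange(anchor.size(0))
    return F.cross_entropy(logits, labels)

def train_step(x_a, x_b):
    # x_a, x_b: two "views" of the same samples (e.g. neighbouring patches).
    for module, opt in zip(modules, optims):
        # Detach inputs so each module only receives its own local gradients.
        x_a, x_b = x_a.detach(), x_b.detach()
        z_a, z_b = module(x_a), module(x_b)
        loss = contrastive_loss(z_a, z_b)
        opt.zero_grad()
        loss.backward()
        opt.step()
        x_a, x_b = z_a, z_b                         # forward activations to the next module

# Example: one step on a random batch of 8 paired views.
train_step(torch.randn(8, 16), torch.randn(8, 16))
```

Because each module depends on the others only through forward activations, the modules can in principle be optimized asynchronously and placed on different devices, which is the property the talk highlights.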
