
Lateral connections in the neocortex

Hosted By
Subutai Ahmad and Lucas S.

Details

Brains@Bay Meetups focus on how neuroscience can inspire us to create improved artificial intelligence and machine learning algorithms. This special edition turns to the function of lateral connections (connections between neurons within a level). Long-range lateral connections are ubiquitous in the neocortex and cannot be explained by purely feedforward models. We have invited researchers from the Allen Institute for Brain Science to discuss their recently published paper on modeling these connections.

Our speakers will be Stefan Mihalas, Ramakrishnan Iyer, and Brian Hu. Their paper presents a network model of cortical computation in which the lateral connections from surrounding neurons enable each neuron to integrate contextual information from features in the surround. They show that adding these connections to deep convolutional networks in an unsupervised manner makes them more robust to noise in the input image and leads to better classification accuracy under noise.

Contextual Integration in Cortical and Convolutional Neural Networks

Ramakrishnan Iyer, Brian Hu and Stefan Mihalas
Allen Institute for Brain Science

Sensory systems of biological organisms need to adapt both to the statistics of their environment and to the tasks they need to perform. These problems are usually treated separately in computational neuroscience. In vision, neuronal circuits and responses in early sensory areas are thought to be adapted to input (natural scene) statistics, while in higher areas they are shaped by the tasks (e.g. object recognition). Can we combine these two aspects?

We start with the hypothesis that there are two types of connections: one driven by the task, and one driven by the statistics of stimuli. We consider a simple model in which the task-driven connections are between layers, while the lateral connections enable individual neurons to integrate contextual information from surrounding neurons.
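The two-pathway idea above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's model: the shapes, the random lateral weights, and the single surround-averaging step are all assumptions for the sake of the example (the paper learns the lateral weights from stimulus statistics).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: F feature channels at each of P spatial positions.
F, P = 8, 16

# Feedforward (between-layer, task-driven) responses at each position.
r_ff = rng.random((P, F))

# Lateral connectivity W[i, j]: influence of surround feature j on feature i.
# In the paper these weights are learned; here they are random placeholders.
W = 0.1 * rng.standard_normal((F, F))

# One step of contextual integration: each neuron adds input from the
# average surround activity, weighted by the lateral connections.
surround = r_ff.mean(axis=0)       # mean activity over all positions
r_ctx = r_ff + surround @ W.T      # contextually modulated responses

assert r_ctx.shape == r_ff.shape
```

The design point is simply that the lateral term is additive and shared across positions, so it modulates, rather than replaces, the feedforward drive.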

We show that it is possible to compute lateral connections using a modified Hebbian learning rule under a set of assumptions. These lateral connections have good agreement with experimental data, and their physiological effect matches observations of extraclassical receptive fields. Incorporating such lateral connections into convolutional neural networks makes them more robust to noise and leads to better performance on noisy versions of the MNIST dataset. Decomposing the predicted lateral connectivity matrices into low-rank and sparse components introduces additional cell types into these networks and enables exploring effects of cell-type specific perturbations on network computation.
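Two of the ingredients mentioned above, a Hebbian-style rule for the lateral weights and a low-rank plus sparse decomposition of the resulting connectivity matrix, can be sketched as follows. This is a hedged stand-in, not the paper's modified Hebbian rule (whose exact form is in the linked article): here the lateral weights are simply the mean-subtracted co-activation (covariance) of feature responses, and the low-rank part keeps only the top singular component.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: F feature channels, T stimulus samples.
F, T = 8, 1000
r = rng.random((F, T))  # responses to a stream of stimuli

# Hebbian-style estimate: lateral weights proportional to the covariance
# of feature co-activations, with self-connections removed.
r_c = r - r.mean(axis=1, keepdims=True)
W = (r_c @ r_c.T) / T
np.fill_diagonal(W, 0.0)

# Low-rank + sparse split (cf. the paper's decomposition): retain the top
# singular component as the low-rank part; the residual is the sparse part.
U, s, Vt = np.linalg.svd(W)
W_lowrank = s[0] * np.outer(U[:, 0], Vt[0])
W_sparse = W - W_lowrank

assert np.allclose(W, W_lowrank + W_sparse)
```

In the paper, the low-rank and sparse components are interpreted as distinct cell types, which is what makes cell-type-specific perturbation experiments possible in the model.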

Our framework can potentially be applied to networks trained on other tasks, with the learned lateral connections aiding computations implemented by inter-layer connections when the input is unreliable.

Link to paper: https://www.frontiersin.org/articles/10.3389/fncom.2020.00031/full

Online event