Professor Michael DeWeese of UC Berkeley presents his research on the neural mechanisms of auditory attention.
Focusing on one voice in a crowded room of boisterous speakers is a common experience, and we humans are extremely good at it. Yet the latest algorithms running on the fastest modern computers fail miserably at isolating a single voice from a noisy background in all but the simplest cases. This contrast demonstrates that attending to desired sounds in our everyday environment poses a surprisingly challenging computational problem for the brain—a problem whose solution would provide insight into the workings of the conscious mind, as well as new approaches for designing machines capable of intelligently processing real-world data.
The impetus to understand how the brain controls auditory attention is heightened by the fact that several prevalent mental disorders—including autism, attention deficit hyperactivity disorder (ADHD), and schizophrenia—are characterized in part by an inability to focus attention on important sounds in the presence of distractors. Inroads toward treatments for these disorders would be enormously beneficial to those afflicted and to society as a whole.
Details: Neural Mechanisms of Selective Auditory Attention.
Animals can selectively respond to a target sound in the presence of simultaneous distractors, similar to the way in which humans can respond to one person’s voice at a cocktail party. To investigate the underlying neural mechanisms, we recorded single-unit activity in primary auditory cortex (A1) and medial prefrontal cortex (mPFC) of rats selectively responding to a target sound from a mixture. We found that pre-stimulus activity in mPFC encoded the selection rule — the sound to which the rat would respond. Moreover, electrically disrupting activity in mPFC significantly impaired performance. Surprisingly, pre-stimulus and stimulus-evoked activity in A1 also encoded the selection rule, a cognitive variable typically considered the domain of prefrontal regions. However, stimulus tuning was not strongly affected. We suggest a model in which activation of a specific network of neurons underlies the selection of an imminent sound from a mixture, giving rise to robust and widespread rule encoding in both brain regions. Time permitting, I will also briefly describe some of our other experimental efforts, including a rodent behavioral paradigm for studying working memory, as well as some of our theoretical work on the statistics of natural scenes and sounds and biologically plausible rules for learning sparse representations in sensory cortex.
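The central analysis described above—reading out a "selection rule" from population activity recorded before the stimulus arrives—can be illustrated with a toy decoding exercise. The sketch below is purely hypothetical: it simulates pre-stimulus spike counts for two rule conditions (all rates, neuron counts, and trial numbers are made up), then trains a simple nearest-centroid classifier to decode the rule from held-out trials. It is not the authors' actual analysis pipeline, only a minimal example of the general idea that a rule can be decoded from population activity.

```python
import random

random.seed(0)

N_NEURONS = 20
N_TRIALS = 100  # trials per rule condition

def simulate_trial(rates):
    """Crude Poisson-like spike counts: 50 small time bins per neuron."""
    return [sum(random.random() < r / 50.0 for _ in range(50)) for r in rates]

# Hypothetical premise: each selection rule engages a slightly different
# network of neurons, shifting each neuron's baseline firing rate a bit.
rates_a = [random.uniform(2.0, 8.0) for _ in range(N_NEURONS)]
rates_b = [r + random.uniform(-2.0, 2.0) for r in rates_a]

trials = [(simulate_trial(rates_a), "A") for _ in range(N_TRIALS)] + \
         [(simulate_trial(rates_b), "B") for _ in range(N_TRIALS)]
random.shuffle(trials)
train, test = trials[:150], trials[50:] and trials[150:]

def centroid(data, label):
    """Mean spike-count vector over all training trials with this label."""
    rows = [x for x, y in data if y == label]
    return [sum(col) / len(rows) for col in zip(*rows)]

c_a, c_b = centroid(train, "A"), centroid(train, "B")

def sq_dist(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v))

def predict(x):
    """Nearest-centroid decoder: assign the closer rule's label."""
    return "A" if sq_dist(x, c_a) < sq_dist(x, c_b) else "B"

accuracy = sum(predict(x) == y for x, y in test) / len(test)
print(f"held-out decoding accuracy: {accuracy:.2f}")
```

If pre-stimulus activity carried no rule information, accuracy would hover near chance (0.5); above-chance decoding on held-out trials is the signature of rule encoding in the population.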