ABSTRACT: The brain empowers humans and other animals with remarkable abilities to
sense and perceive their acoustic environment in highly degraded
conditions. These seemingly trivial tasks for humans have proven extremely
difficult to model and implement in machines. One crucial limiting factor
has been the need for a deep interaction between two very different
disciplines, that of neuroscience and computer engineering. In this talk,
I will present results of an interdisciplinary research effort to address
the following fundamental
questions: 1) what computation is performed in the brain when we listen to
complex sounds? 2) how could this computation be modeled and implemented
in computational systems? and 3) how could one build an interface to
connect brain signals to machines? I will present results from recent
invasive neural recordings in human auditory cortex that show a
distributed representation of speech in auditory cortical areas.
This representation remains unchanged even when an interfering speaker is
added, as if the second voice is filtered out by the brain.

In
addition, I will show how this knowledge has been successfully
incorporated into novel automatic speech processing applications, which
DARPA and other agencies have adopted for their superior performance.

Finally,
I will demonstrate how speech can be decoded directly from brain signals,
which could eventually enable communication for people who have lost the
ability to speak. This integrated research approach leads to a better
scientific understanding of the brain, innovative computational
algorithms, and a new generation of brain-machine interfaces.
Refreshments will be served in the room prior to the talk.
*NOTE* This lecture will NOT be broadcast live via the Internet. See
http://www.cs.washington.edu/news/colloq.info.html for more information.