This month's eastside meeting features a talk on Automatic Speech Recognition in iOS and other mobile applications. Please note that we are not meeting in our usual Redmond/Thinkspace location this month. We will meet at Google in Kirkland. See the bottom of this post for more details.
Here is a summary of the talk and an introduction from our speaker:
"Since the advent of Siri, the use of speech recognition (ASR) in mobile applications has seen measurable growth both within the iOS ecosystem and without and has pushed public awareness (and in many cases, acceptance) of the technology to a new level. This talk will trace the history behind this development, while aiming to demystify the underlying technical underpinnings of ASR. It will also explore how current speech applications also make use closely related technologies, primarily natural language processing and speech synthesis. While not a practicum, this session will feature several live demonstrations with the goal of revealing the intriguing boundaries between art and science that lie behind a successful and well-designed speech app and discussing what it takes to create such an application. Focus will be on iOS but will include mention of other platforms. "
"Alexander Caskey has been working with with speech and natural language processing for over 20 years, with stops along the way at Microsoft Research, Wildfire (which premiered the first speech-driven personal assistant in 1992), Cisco, and Linguistic Technology. He is currently at work on developing a mobile platform that uses speech technology to collect, store, and distribute critical information for emergency relief efforts in areas of the world where the cell phone is the only viable means of communication."