Join us on September 19th as we explore facial recognition and emotion recognition as components of the AR ecosystem.
How Facial/Emotion Recognition Technology Works
Forest Handford (https://www.linkedin.com/in/foresthandford) - DevOps Lead @ Affectiva (http://www.affectiva.com/)
Formalized emotion recognition dates back to the 1800s, when French neurologist Duchenne de Boulogne identified that real smiles (now known as Duchenne smiles) and fake smiles engage different muscles. In the 1970s, Paul Ekman was the first to publish and popularize the Facial Action Coding System (FACS), which allowed people to manually code expressions and emotions. Today, this system is automated, allowing anyone with a stock webcam to detect people's emotions. Forest will show how this technology works today and what augmented reality applications it enables.
Reading the Face under Uncertainty
Dr. Y. Raymond Fu (http://www1.ece.neu.edu/~yunfu/) - Associate Professor @ Northeastern University, COE/CCIS
Dr. Fu will present his recent and ongoing research and projects in the field of face recognition and envision future research trends in social-media-oriented facial image analytics. In particular, he will introduce a general computational methodology for graph-embedded dimensionality reduction, along with solutions for computability, stability, and complexity under uncertain variations. He will also demonstrate extensive real-world applications, demos, and research projects.
6:00 - Doors open, demos begin, pizza and soda are served thanks to Akamai
6:30 - 7:00 - Intro and AR update by Neil Gupta
7:00 - 7:45 - Presentations on Facial/Emotion Recognition in AR applications
7:45 - 9:00 - Demonstrations and hands-on time with HoloLens
Akamai Technologies generously hosts the BostonAR meetup events and is committed to diversity in everything it does. Learn more about the #3 top place to work in 2015 here: https://www.akamai.com/us/en/about/careers/workplace-diversity.jsp