
Designing Augmented Reality Systems to Empower People with Low Vision

Hosted By
Thomas L.

Details

Low vision is a visual impairment that falls short of blindness but cannot be corrected with eyeglasses or contact lenses. It is a pervasive disability that creates many difficulties in people's daily lives. Unlike people who are blind, low vision people have residual vision and use it extensively in daily activities. However, prior research has mainly focused on providing audio and tactile feedback for blind users, overlooking low vision people's residual vision and unique needs. To address this gap in low vision accessibility, I seek to leverage the residual vision that low vision people have, and to design intelligent augmented reality (AR) systems that provide direct visual augmentations according to the user's visual abilities and tasks.

In this talk, I will discuss how I design tailored visual augmentations and build AR systems to assist low vision people in daily tasks such as visual search and stair navigation. For example, I built a head-mounted AR system that presented visual cues to orient users' attention in a visual search task, as well as a projection-based AR system that projected visual highlights onto stair edges to support safe stair navigation. I will conclude my talk by highlighting future research directions, such as building AR systems for multi-user scenarios (e.g., social interaction) and for diverse disabilities (e.g., autism).

Presenter Bio:
Yuhang Zhao is a postdoctoral researcher at Cornell Tech and will be joining the CS department at the University of Wisconsin-Madison as an assistant professor in January 2021. Her research interests lie in human-computer interaction (HCI), accessibility, and augmented and virtual reality. She designs and builds intelligent interactive systems to enhance human abilities. She has published at many top-tier conferences and journals in the field of HCI (e.g., CHI, UIST, ASSETS) and has received three U.S. and international patents. Her work received two best paper honorable mention awards at the SIGACCESS Conference on Computers and Accessibility (ASSETS) and has been covered by various media outlets (e.g., TNW, New Scientist). She received her Ph.D. in Information Science from Cornell University.

Location:
Mozilla Hubs Space
https://hubs.mozilla.com/G6edX79/a11yvr-project/

YouTube Channel https://www.youtube.com/channel/UCqhCc1b6Cq69eg-iYeVKOog

Timeline:
Please NOTE: Meetup does not handle time zones well. This Meetup is set in Tokyo, Japan, so it will display differently from your local time zone. When you RSVP, use the "Add to calendar" link to receive a calendar invite converted to your time zone.

This timeline is based on EDT time zone.
7:00 - 7:10
Login/explore/troubleshooting audio and video

7:10 - 7:15
Thomas Logan introduction to A11yVR

7:15 - 7:50
Presentation

7:50 - 8:10
Q&A with attendees

8:10 - 9:00
Breakout sessions in small groups to get to know each other in virtual reality. It is up to you whether you want to stay and continue meeting other people in the room.

Accessibility Details:
Mozilla Hubs holds weekly office hours that can be used to practice and gain an understanding of how to interact in Mozilla Hubs.

We will have captions provided by Stanley Sakai, with the real-time text displayed in Hubs. We will have video streaming of the event to YouTube via a partnership with the Internet Society Accessibility SIG (a11ySIG).

If you are unable to join via Hubs, we encourage you to watch the stream on YouTube and ask questions. As we begin meeting regularly each month, we will capture feedback on ways we can be more accessible and inclusive in virtual reality and start incorporating it into a set of Best Practices.

A11yVR - Accessibility Virtual Reality Group