- When: Thursday, February 13, 2020 from 11:00 AM to 12:00 PM
- Speaker: Yuhang Zhao
- Location: Engineering Building 4201
Abstract:
Visually impaired people are marginalized by inaccessible social infrastructure and technology, facing severe challenges in all aspects of their lives. Mixed reality (MR) technology can incorporate virtual information into the physical environment, presenting a unique opportunity to augment the world for visually impaired people. However, it also creates a virtual world that is currently vision-dominant and can therefore introduce new accessibility issues. I explore how MR technology can empower visually impaired people, providing them with equal access to both the real and virtual worlds. In this talk, I will focus on people with low vision, who have visual impairments but are not blind. I will discuss how I leverage MR technology to address both the real-world challenges that low vision people face and the accessibility of the virtual world itself. To address the real-world challenges, I design and build intelligent MR systems that directly enhance users' visual abilities by providing visual augmentations. For example, I built a head-mounted MR system that presented visual cues to orient users' attention in a visual search task, as well as a projection-based MR system that projected visual highlights onto stair edges to support safe stair navigation. Meanwhile, to improve the accessibility of the virtual world generated by MR, I adapted real-world low vision aids and technologies to the virtual world, creating a set of tools that enhance virtual reality (VR) applications for low vision people. To make these tools broadly applicable, I developed both a plugin that modifies existing VR applications post hoc and a Unity toolkit that enables developers to build more accessible VR applications. I will conclude my talk by highlighting my future research directions, such as building MR systems for multi-user scenarios (e.g., social interaction) and for diverse disabilities (e.g., autism), and constructing general MR accessibility frameworks.
Short Bio:
Yuhang Zhao is a sixth-year PhD candidate in Information Science at Cornell University. Her research interests lie in human-computer interaction (HCI), accessibility, and augmented and virtual reality. She designs and builds intelligent interactive systems to enhance human abilities. She has published at top-tier conferences and journals in the field of HCI (e.g., CHI, UIST, ASSETS) and holds three U.S. and international patents. She has interned at Facebook, Microsoft Research, and Microsoft Research Asia. Her work has received two best paper honorable mention awards at the ACM SIGACCESS Conference on Computers and Accessibility (ASSETS) and has been covered by various media outlets (e.g., TNW, New Scientist). She received her B.A. and M.S. degrees, the latter with thesis distinction, in Computer Science from Tsinghua University.