Across the world, an increasing number of people experience vision impairments or blindness. For people with healthy eyes, it can be difficult to discern how the world looks to a person with impaired eyesight. Medical explanations in books and articles, descriptions from patients, and 2D images of impaired vision are often insufficient to communicate the negative effects of vision impairments and to truly make someone understand how the world looks through the eyes of a visually impaired person. Relatives, employers, and even the medical providers of those affected by eye diseases may benefit from simulations of vision impairments that increase understanding, sympathy, and empathy. Research on digital vision-impairment simulations has grown accordingly, and with the rise of emerging technologies such as extended reality (XR), simulations can now be made more realistic and immersive than ever before. This document provides a comprehensive overview of the state of the art in simulating vision impairments using a variety of hardware and software solutions that address different conditions and impairments, and discusses the role of XR solutions in increasing sympathy and empathy for people with vision impairments.
1. XR and Vision Impairments
Over the last few decades, there have been a number of research and industrial efforts to accurately simulate vision impairments. This earlier work fits broadly into three categories: physical goggles with special lenses meant to simulate particular conditions, overlays of videos and simulations viewed on 2D displays, and XR systems that present dynamic representations of the impairments. These systems usually targeted specific conditions and were often limited in their realism, immersiveness, and adjustability.
Much of the earliest work, dating to the 1940s, used physical goggles and other obstructions to artificially reduce visual capabilities, with the goal of helping people with normal sight better understand the impact of low vision [1][2]. Zimmerman’s Low Vision Simulation Kit [3] used goggles with exchangeable lenses to represent different conditions. Later, Zagar and Baggarly [4] also used goggles to help student pharmacists understand how patients with various ocular diseases and vision impairments might interact with medication; specific goggles were created per condition, including glaucoma, cataracts, macular degeneration, diabetic retinopathy, and retinitis pigmentosa.
Over the last decade, these types of physical instruments have been used to study how different conditions impact a patient’s life. Wood et al. [5] used goggles to understand the impact of cataracts and refractive blur on night driving, showing that simulated cataracts degraded driving capability more severely than refractive blur. Hwang et al. [6] built on this work, using a Bangerter occlusion foil on plano lenses to simulate cataracts and showing how headlight-induced glare reduces pedestrian visibility for those with the condition.
With the introduction of more powerful computing and display technologies, researchers moved to developing a number of static and dynamic 2D simulations. Greenberg [7] recreated the Farnsworth–Munsell 100-hue test on computer monitors to assess color vision deficiency, with a particular focus on presenting images as they would be seen by dichromats. Later, Brettel et al. [8] and Viénot et al. [9] both presented simulations of dichromacy by reprojecting images in LMS color space for viewing on monitors. Expanding to additional conditions, Banks and McCrindle [10] recreated the visual effects of several ocular diseases (including glaucoma and AMD) by creating overlays and filters for 2D images viewed on a desktop display.
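To make the LMS reprojection concrete, the following minimal sketch (an assumed reconstruction, not the authors’ code) implements the protanopia case in the style of Viénot et al. [9]: colors are mapped into cone-response (LMS) space, the missing long-wavelength response is reconstructed from the remaining two, and the result is mapped back for display. The simple 2.2-gamma linearization is an approximation of proper sRGB decoding.

```python
# A minimal sketch of the LMS reprojection approach of Vienot et al. [9],
# shown for protanopia; deuteranopia is handled analogously.
import numpy as np

# Linear-RGB -> LMS matrix from Vienot, Brettel, and Mollon (1999).
RGB_TO_LMS = np.array([[17.8824,   43.5161,  4.11935],
                       [ 3.45565,  27.1554,  3.86714],
                       [ 0.0299566, 0.184309, 1.46709]])
LMS_TO_RGB = np.linalg.inv(RGB_TO_LMS)

# Protanopia: the missing L (long-wavelength) response is reconstructed
# from the intact M and S responses.
PROTAN = np.array([[0.0, 2.02344, -2.52581],
                   [0.0, 1.0,      0.0    ],
                   [0.0, 0.0,      1.0    ]])

def simulate_protanopia(rgb):
    """rgb: float array of shape (..., 3) in [0, 1], sRGB-encoded."""
    linear = np.clip(rgb, 0.0, 1.0) ** 2.2            # rough sRGB linearization
    lms = linear @ RGB_TO_LMS.T                       # into cone-response space
    lms_sim = lms @ PROTAN.T                          # collapse the missing axis
    out = np.clip(lms_sim @ LMS_TO_RGB.T, 0.0, 1.0)   # back to displayable RGB
    return out ** (1.0 / 2.2)                         # re-encode for display

# Pure red appears as a dim yellowish tone to a protanope.
print(simulate_protanopia(np.array([1.0, 0.0, 0.0])))
```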
These technologies also help application and web developers better accommodate impaired users. Simulators such as VisionSimulations.com [11] and the tools of Goodman-Deane et al. [12] and Leventhal [13] allow developers to assess the accessibility of websites, with the latter visualizing visual snow, glare, cataracts, and floaters, among other symptoms.
Over the last 15 years, a new generation of tools has appeared, taking advantage of the emergence of 3D game engines to create more dynamic and immersive simulations. Lewis et al. [14][15] presented a set of simulations built using post-processing effects rendered in Unreal Engine 3, allowing several conditions to be experienced in explorable 3D environments. Experts found the simulations educational and useful, even given the limited customizability of the visualizations.
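As an illustration of the kind of overlay or post-processing filter these 2D and game-engine simulators apply, the sketch below approximates cataract-like vision with blur, contrast compression, and a yellow-brown tint. It is not code from [10] or [14][15], and the parameter values are illustrative placeholders, not clinically derived.

```python
# An illustrative cataract-like post-process filter; blur radius, contrast
# factor, and tint are placeholder values.
import numpy as np
from scipy.ndimage import gaussian_filter

def cataract_filter(img, blur_sigma=4.0, contrast=0.6,
                    tint=(1.0, 0.93, 0.75)):
    """img: float array (H, W, 3) in [0, 1]. Approximates cataract-like
    vision: light scatter (blur), reduced contrast, and lens yellowing."""
    out = np.stack([gaussian_filter(img[..., c], blur_sigma)
                    for c in range(3)], axis=-1)   # scatter -> overall blur
    out = 0.5 + contrast * (out - 0.5)             # compress contrast
    out = out * np.asarray(tint)                   # yellow-brown color cast
    return np.clip(out, 0.0, 1.0)
```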
In the last decade, XR technology has become more immersive, powerful, and affordable, running on commodity consumer-grade hardware, with a number of manufacturers integrating eye-tracking technology. This novel technology allows for simulations with the immersiveness of goggles and the fidelity and customizability of 3D desktop systems. Additionally, the introduction of eye tracking facilitates the creation of much more dynamic visualizations that respond to a user’s gaze. For many conditions (e.g., glaucoma), this is critical to properly representing their appearance.
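The following minimal sketch illustrates why gaze data matters: a glaucoma-like loss of the peripheral field must be re-anchored to the tracked gaze point every frame, otherwise the defect would appear fixed to the screen instead of the retina. This is a generic illustration rather than code from any cited system, and get_gaze() is a hypothetical stand-in for a real eye-tracker API.

```python
# A minimal sketch of a gaze-contingent field-loss mask.
import numpy as np

def field_loss_mask(h, w, gaze_xy, preserved_radius=120, falloff=80):
    """Per-pixel brightness mask: 1 inside the preserved central field,
    falling smoothly to 0 in the lost periphery."""
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1])
    return np.clip(1.0 - (dist - preserved_radius) / falloff, 0.0, 1.0)

def render_frame(frame, get_gaze):
    """Re-anchor the mask to the current gaze point every frame, so the
    simulated defect stays fixed to the retina, not the screen."""
    h, w = frame.shape[:2]
    gaze = get_gaze()  # hypothetical tracker call -> (x, y) pixel coordinates
    return frame * field_loss_mask(h, w, gaze)[..., None]
```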
Werfel et al. [16], Väyrynen et al. [17], and Zavlanou and Lanitis [18] all developed VR simulations of vision impairments. While immersive and representative of the conditions, these simulations offered little ability to tune symptoms or adjust the severity and characteristics of the simulated conditions. Additionally, state-of-the-art hardware has enabled much more realistic rendering. Jones and Ometto [19] utilized eye tracking to achieve near real-time rendering for their XR simulations of disability glare, blur, spatial distortions, perceptual filling-in, and defects in color vision. In later work [20], they simulated glaucoma with gaze-dependent regional blur generated from the perimetric data of a real patient. In a similar vein, Zhang et al. [21] presented an AR system for simulating peripheral vision loss, utilizing monochrome LCDs to create virtual blind spots overlaid on the user’s view of the real world.
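Complementing the gaze-anchoring sketch above, the following sketch shows one plausible way to drive such an effect from measured data (an assumption about the general technique, not the implementation of [20]): a coarse grid of perimetric sensitivity values is upsampled into a per-pixel blur weight that blends sharp and blurred renderings. The grid layout, dB normalization, and blending scheme are illustrative.

```python
# A sketch of perimetry-driven, spatially varying blur.
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def blur_weight_from_perimetry(sensitivity_db, out_shape, max_db=35.0):
    """sensitivity_db: coarse (rows, cols) grid of visual-field sensitivities
    in dB. Returns per-pixel weights in [0, 1]; 1 = deep defect (full blur)."""
    weight = 1.0 - np.clip(sensitivity_db / max_db, 0.0, 1.0)
    zy = out_shape[0] / weight.shape[0]
    zx = out_shape[1] / weight.shape[1]
    return np.clip(zoom(weight, (zy, zx), order=1), 0.0, 1.0)

def apply_field_blur(frame, sensitivity_db, sigma=6.0):
    """Blend each pixel between the sharp frame and a blurred copy according
    to the defect map (which, as above, would be re-centered on gaze)."""
    blurred = np.stack([gaussian_filter(frame[..., c], sigma)
                        for c in range(3)], axis=-1)
    w = blur_weight_from_perimetry(sensitivity_db, frame.shape[:2])[..., None]
    return (1.0 - w) * frame + w * blurred
```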
Commercially available apps (e.g., the Novartis ViaOpta Simulator [22] and VisionSim [23] from the Braille Institute) simulate vision impairments on a smartphone screen but are, consequently, monoscopic, limited in field of view, and unable to properly represent gaze-dependent effects.
XREye [24] builds on all of these approaches by simulating complex eye diseases, such as AMD, cornea disease, and achromatopsia, each comprising multiple symptoms, in real time with eye tracking, making it possible to use these simulations in XR HWDs while minimizing the risk of VR sickness. Most of the aforementioned earlier simulations are simplifications of the vision impairments they aim to represent. To create more realistic representations, Krösl et al. [24] collaborated closely with ophthalmologists and modeled their simulations on the expertise of these medical experts and on reports from patients. With these additional considerations, XREye achieves more sophisticated visualizations of complex eye disease patterns.
2. XR for Sympathy and Empathy
The immersiveness of XR and its ability to place users in different spatial contexts and viewpoints have increased research interest in XR as an empathy-enabling medium over the past decade [25]. In broad terms, empathy can be defined as the capacity to feel another person’s emotions. It goes deeper than sympathy, the capacity to feel for the other (i.e., not sharing the other’s feeling, but rather feeling sorrow and care for the emotions of others) [26]. Sympathy and empathy are interconnected, but it is the possibility of “seeing through the eyes of others” using XR HWDs that makes the technology especially interesting for researching empathetic responses. For example, VR has become a prominent medium for nonfiction storytelling, with many documentaries placing viewers in the position of the subject of the story, aiming for an empathetic connection [27].
Empathy is a key factor in understanding the lives and symptoms of people with disabilities or age-related diseases in order to provide appropriate treatment. Therefore, the impact of using XR to foster empathy in students of medicine, nursing, and health sciences has been a topic of research. Campbell et al. [28] reported statistically significant changes in pre/post surveys and improvement trends in undergraduate nursing students’ empathy toward patients with Alzheimer’s dementia. Adefila et al. [29] likewise reported a statistically significant improvement in students’ compassion for dementia patients in a pilot study in a faculty of health and life sciences. The study also interviewed the participants three months after the experience and collected testimonials on their interactions with dementia patients; the insights gained suggest that a VR experience might change work practices in the long term. Oh et al. [30] discuss the potential limitations of immersive technologies in fostering empathy. Investigating how embodied perspective-taking may help mitigate ageism and intergenerational bias, they found a stronger effect for participants who embodied an elderly person in an immersive virtual environment than for those who engaged in a traditional perspective-taking exercise via mental simulation, when a threat was presented without intergroup contact. However, they found no substantial difference in participants’ behaviors between the two methods when participants were exposed to a concrete and experiential intergroup threat. Paananen et al. [25] presented a systematic literature survey of recent work on empathy in XR and pointed out that, while current research presents evidence of XR experiences generating empathetic responses in users, this impact tends to be limited, and little research has addressed design guidelines for applications that foster long-term empathy in users.
The work by Guarese et al. [31], which focused on assistive technology, presented an approach for sympathy and empathy evaluation based on the Ad Response Sympathy (ARS) and Ad Response Empathy (ARE) tests from Escalas and Stern [26]. Guarese et al. developed an AR device as an assistive technology for micro-guidance in simple object-placement tasks in a kitchen environment and evaluated its effect on sympathy and empathy in an experiment with blindfolded individuals. To gather data, they conducted an online survey as a pre-test to collect sighted people’s self-perception of sympathy and empathy for individuals with vision impairments, and a post-test with another group of sighted people after they had tested the developed assistive technology solution. Additionally, they assessed blind and visually impaired people’s perception of sympathy and empathy from others and compared these assessments with the data from the pre-test and post-test. The results showed a significant increase in sympathetic and empathetic responses from the sighted people who participated in the experiment.
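To illustrate the shape of such a between-groups comparison, the sketch below aggregates hypothetical ARS/ARE-style Likert scores for the independent pre-test and post-test groups and compares them; both the data and the choice of a Mann-Whitney U test are assumptions for illustration, not the authors’ exact analysis.

```python
# Hypothetical data shaped like per-respondent ARS/ARE scores.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
# Per-respondent means over six 1-7 Likert items (illustrative data).
pre_group = rng.integers(1, 8, size=(40, 6)).mean(axis=1)   # before exposure
post_group = rng.integers(2, 8, size=(40, 6)).mean(axis=1)  # after the AR task

stat, p = mannwhitneyu(post_group, pre_group, alternative="greater")
print(f"U = {stat:.1f}, p = {p:.4f}")  # small p => higher post-test scores
```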
In an attempt to allow designers to test the usability and accessibility of their websites and desktop applications, Choo et al. [32] presented Empath-D to enable Empathetic User Interface Design. Using a Samsung Gear VR with a Galaxy Note 4, their application was aimed at helping designers account for motor problems as well as visual impairments: by applying impairment profiles, designers could simulate a set of conditions and reproduce how a person with a certain impediment would interact with an interface. This was later extended by Kim et al. [33] to work in a virtual environment for the design of smartphone apps. Empath-D was more recently evaluated by Choo et al. [34] using an augmented virtuality impairment simulation of a mockup of the Instagram app; the goal was to investigate the usability challenges that patients with cataracts face when using mobile apps. While the intensity of each effect was adjustable at run-time, the included impairment simulations were mostly simplified versions of the respective eye diseases, without eye tracking. Even though the system was developed to facilitate empathetic user interface design, its effect on users’ empathy levels has not been evaluated.
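A hypothetical data structure for such an impairment profile might bundle visual and motor parameters that a designer can swap in at run-time; all field names and values below are illustrative assumptions, not Empath-D’s actual format.

```python
# A hypothetical "impairment profile" bundling visual and motor parameters.
from dataclasses import dataclass

@dataclass
class ImpairmentProfile:
    name: str
    blur_sigma: float            # acuity loss applied to the rendered UI
    contrast_scale: float        # 1.0 = normal contrast sensitivity
    tremor_amplitude_px: float   # motor: jitter added to touch input
    touch_delay_ms: float        # motor: slowed response time

MILD_CATARACT = ImpairmentProfile(
    "mild cataract", blur_sigma=2.0, contrast_scale=0.8,
    tremor_amplitude_px=0.0, touch_delay_ms=0.0)
ESSENTIAL_TREMOR = ImpairmentProfile(
    "essential tremor", blur_sigma=0.0, contrast_scale=1.0,
    tremor_amplitude_px=12.0, touch_delay_ms=150.0)
```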
Building on the sympathy and empathy questionnaire developed by Guarese et al. [31], Krösl et al. [35] conducted a study to evaluate the potential of their vision-impairment simulation system, XREye, to increase sympathy and empathy, and to assess its educational value. In the user study, 56 educators and education students either tested the vision impairment simulations themselves using a VR HWD or observed another user’s XR view projected on a wall. The study showed an increase in sympathy and empathy after experiencing XREye, along with positive feedback regarding its educational value. The authors suggest that vision-impairment simulations such as XREye can be used for educational purposes to increase awareness of the challenges people with vision impairments face in their everyday lives.