Navigation Systems for the Visually Impaired: Comparison

The visually impaired suffer greatly while moving from one place to another. They face challenges in going outdoors and in protecting themselves from moving and stationary objects, and they also lack confidence due to restricted mobility. Due to the recent rapid rise in the number of visually impaired persons, the development of assistive devices has emerged as a significant research field.

  • navigation method
  • blind people
  • visually impaired

1. Braille Signs Tools

Individuals with visual impairments must remember directions, as it is difficult for them to note directions down. If they lose their way, the main recourse is to find somebody who can help. While Braille signs can be a decent solution here, the difficulty with this methodology is that they cannot be used as a routing tool [1][65]. These days, numerous public areas, such as hospitals, railway stations, educational buildings, entryways, lifts, and other parts of public infrastructure, are outfitted with Braille signs to simplify routes for visually impaired individuals. However, although Braille characters can help visually impaired individuals identify their location, they do not help them find a path.

2. Smart Cane Tools

Smart canes help the visually impaired navigate their surroundings and detect obstacles ahead, large or small, whose presence and size cannot be identified with simple walking sticks. A smart guiding cane detects the obstacle, and a speaker in the intelligent system embedded in the cane emits a sound. The cane can also distinguish a dark environment from a bright one [2][66]. For indoor usage, an innovative cane navigation system was proposed [3][67] that uses IoT and cloud networks. The intelligent cane navigation system collects data and transmits them to the cloud network; an IoT scanner is also attached to the cloud network. The concept is shown in Figure 1.
Figure 1.
Smart Cane Navigation System.
The Smart Cane Navigation System comprises a camera, microcontrollers, and accelerometers, and delivers audio messages. A cloud service is exploited to assist the user in navigating from one point to another. This navigation system fundamentally assists visually impaired or blind people in navigating and finding the fastest route. Nearby objects are detected, and users are warned via a sound buzzer and a sonar [4][63]. The cloud service acquires the position of the cane and the route to the destination; these data travel from the Wi-Fi Arduino board to the cloud. The system then uses a Gaussian model for triangulation-based position estimation. The cloud service is linked to a database that stores the shortest, safest, and longest paths. It outputs three lights: red when objects are more than 15 m away; yellow between 5 and 15 m; and green below 5 m. Distance is calculated from sound emission and echoes, the cheapest way of measuring distance (a minimal sketch follows the list below), and a text-to-audio converter warns of possible hurdles or obstacles. Experiments have shown that this navigation system is quite effective in suggesting the fastest/shortest route to users and identifying hurdles or obstacles:
  • The system uses a cloud-based approach to navigate different routes. A Wi-Fi Arduino board in this cane connects to a cloud-based system;
  • The sound echoes and emissions are used to calculate the distance, and the user obtains a voice-form output;
  • The system was seen to be very efficient in detecting hurdles and suggesting the shortest and fastest routes to the visually impaired via a cloud-based approach.
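A minimal sketch of the echo-based distance calculation and the three-colour indicator described above; the thresholds follow the text, while function names and the example timing are illustrative assumptions, not the cited system's code.

```python
# Minimal sketch: ultrasonic distance estimation and the three-colour
# proximity indicator. Thresholds follow the text as stated above.
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C

def distance_from_echo(echo_time_s: float) -> float:
    """Distance = (speed of sound x round-trip time) / 2."""
    return SPEED_OF_SOUND * echo_time_s / 2.0

def indicator_colour(distance_m: float) -> str:
    """Map obstacle distance to the cane's indicator light."""
    if distance_m > 15.0:
        return "red"      # object farther than 15 m
    elif distance_m >= 5.0:
        return "yellow"   # between 5 and 15 m
    else:
        return "green"    # closer than 5 m

print(indicator_colour(distance_from_echo(0.02)))  # ~3.4 m -> "green"
```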

3. Voice-Operated Tools

This outdoor voice-operated navigation system is based on G.P.S., ultrasonic sensors, and voice commands. It announces the user's current position and provides guidance for traveling. The problem with this system is that it fails at obstacle detection and warning alerts [5][68]. Another navigation system uses a microcontroller to detect obstacles and a feedback system that alerts users to obstacles through voice and vibration [6][69].

4. Roshni

Roshni is an indoor navigation system that guides users through voice messages triggered by pressing keys on a mobile unit. The user's position is identified by sonar, with ultrasonic modules mounted at regular intervals on the ceiling. Roshni is portable, easy to use, and unaffected by environmental changes. However, it requires a detailed interior map of the building before it can be used, which limits it to indoor navigation [2][66].

5. RFID-Based Map-Reading Tools

RFID is the fourth category of wireless technology used to assist visually impaired persons in indoor and outdoor activities. This technology is based on the "Internet of Things paradigm": an IoT physical layer of low-cost, energy-efficient sensors helps the visually impaired navigate their surroundings. However, its short communication range makes RFID unsuitable for deployment over large spatial ranges. In [7][70], an indoor navigation system for blind and older adults was proposed, based on the RFID technique, to assist disabled people by enabling self-navigation in indoor surroundings. The goal of this approach was to handle and manage indoor navigation challenges while taking into consideration the accuracy and dynamics of various environments. The system comprised two modules for navigation and localization: a server and a wearable module containing a microcontroller, ultrasonic sensor, RFID, Wi-Fi module, and voice control module. Experiments showed 99% accuracy, and the system takes 0.6 s to locate an obstacle (a toy lookup sketch is given below).
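As a rough illustration of how such RFID-based localization can work, the sketch below maps detected tag IDs to known indoor positions and issues a voice-style instruction; the tag map, positions, route, and helper names are all hypothetical.

```python
# Hypothetical sketch: RFID tag IDs mapped to surveyed indoor positions.
TAG_POSITIONS = {
    "tag-017": (2.0, 4.5),   # corridor junction (x, y in metres)
    "tag-018": (2.0, 9.0),   # lift lobby
    "tag-019": (6.5, 9.0),   # entrance hall
}
ROUTE = ["tag-017", "tag-018", "tag-019"]  # precomputed path to the goal

def next_instruction(current_tag: str) -> str:
    """Announce the next waypoint after the tag just read by the reader."""
    if current_tag not in ROUTE:
        return "Off route. Please stop."
    i = ROUTE.index(current_tag)
    if i == len(ROUTE) - 1:
        return "You have arrived."
    x0, y0 = TAG_POSITIONS[current_tag]
    x1, y1 = TAG_POSITIONS[ROUTE[i + 1]]
    heading = "straight ahead" if x0 == x1 or y0 == y1 else "diagonally"
    return f"Continue {heading} to the next marker."

print(next_instruction("tag-017"))  # -> "Continue straight ahead ..."
```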
Another map-reading system based on RFID enables visually disabled persons to pass through public places using an RFID tag grid, a Bluetooth interface, an RFID cane reader, and a personal digital assistant [8][71]. This system is relatively expensive because of the hardware units it includes, and it is unreliable in areas with heavy traffic, where there is a risk of collision.
Another navigation system based on passive RFID, proposed in [9][72], is equipped with a digital compass to assist the visually impaired. The RFID transponders are mounted on the floor, as tactile paving, to build RFID networks. Localization and positioning rely on the digital compass, and guiding directions are given via voice commands. Table 1 details RFID-based navigation tools with recommended models for the visually impaired.
Table 1.
Summary of Various RFID-Based Navigation Tools.

6. Wireless Network-Based Tools

Wireless network-based solutions for navigation and indoor positioning include various approaches, such as cellular communication networks, Wi-Fi networks, ultra-wideband (U.W.B.) sensors, and Bluetooth [15][73]. Indoor positioning is highly reliable in the wireless network approach and easy for blind persons to use. Table 2 summarises various studies based on wireless networks for VI people.
Table 2.
Summary of Various Wireless-Network-Based Navigation Tools.

7. Augmented White Cane Tools

The augmented white cane-based system is an indoor navigation system specifically designed to help the visually impaired move freely in indoor environments [19][74]. Its prime purpose is to provide real-time navigation information that helps users make appropriate decisions, for example about the route to follow in an indoor space. To provide such information, the system accesses the physical environment through what is called a micro-navigation system. Possible obstacles must be detected, and the intentional movements of the user must be known, to help the user decide on movements. The solution relies on the interaction of several components: the main components are two infrared cameras, a software application running on a computer that coordinates the system, and a smartphone that delivers the navigation information, as shown in Figure 2.
Figure 2.
Augmented White Cane.
The white cane helps determine the user's position and movement. It includes several infrared L.E.D.s and a button to activate and deactivate the system. The cane is the most suitable object to represent the user's position, designed with the Augmented Objects Development Process (AODeP). For an object to be augmented, several requirements can be identified: (1) the object should emit infrared light that the Wiimote can capture, (2) the user should carry it so that their location can be obtained, (3) it should be small enough not to hinder the user's movement, and (4) it should minimize the cognitive effort required to use it (a toy triangulation sketch follows the list below):
  • The white cane provides real-time navigation by studying the physical indoor environment by using a micro-navigation system;
  • The two infrared cameras and a software application make indoor navigation more reliable and accurate;
  • The whole system takes input through infrared signals to provide proper navigation.
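A toy sketch of how two fixed infrared cameras could locate the cane's L.E.D. in 2D: each camera reports a bearing angle to the light source, and the position follows from intersecting the two rays. The camera positions and angles below are illustrative assumptions, not the cited system's calibration.

```python
import math

def locate_led(cam1, angle1, cam2, angle2):
    """Intersect two bearing rays (angles in radians from the x-axis)
    cast from cameras at known 2D positions; returns (x, y)."""
    x1, y1 = cam1
    x2, y2 = cam2
    d1 = (math.cos(angle1), math.sin(angle1))
    d2 = (math.cos(angle2), math.sin(angle2))
    # Solve cam1 + t*d1 = cam2 + s*d2 for t via a 2D cross product.
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-9:
        raise ValueError("rays are parallel; no unique intersection")
    t = ((x2 - x1) * d2[1] - (y2 - y1) * d2[0]) / denom
    return (x1 + t * d1[0], y1 + t * d1[1])

# Two cameras in room corners, both sighting the cane's infrared L.E.D.
print(locate_led((0.0, 0.0), math.radians(45), (4.0, 0.0), math.radians(135)))
# -> (2.0, 2.0): the rays cross at the cane's position
```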

8. Ultrasonic Sensor-Based Tools

This ultrasonic sensor-based system comprises a microcontroller with synthetic speech output and a portable device that guides users to walking points. Obstacles are detected using the principle of high-frequency sound reflection. Guidance is given in vibro-tactile form to reduce navigation difficulties. The limitation of such a system is that walls block the ultrasound signals, resulting in less accurate navigation. In a related indoor navigation system designed for the visually impaired, the user's movement is continuously tracked by an RFID unit, and guidance is delivered via a tactile compass and a wireless connection [20][75].

9. Blind Audio Guidance Tools

The blind audio guidance system is based on an embedded system, which uses an ultrasonic sensor for measuring distance, an A.V.R.-based sound system for audio instructions, and an infrared (I.R.) sensor to detect objects, as shown in Figure 3. The primary functions of this system are detecting paths and recognizing the environment: the sensors capture signals from the surroundings and convert them into auditory information. This system reduces the training time required to use the white cane. However, identifying the user's location globally remains an issue [21][76]. Additionally, Table 3 lists various blind audio guidance system features relating to distance measurement, audio instructions, and hardware costs.
Figure 3.
The Blind Audio Guidance System.
Table 3.
Properties of the Blind Guidance System.

10. Voice and Vibration Tools

This system uses an ultrasonic sensor for obstacle detection. Since people with visual impairment or blindness typically rely more on hearing than others, this navigation system gives alerts via voice and vibration feedback. The system works both outdoors and indoors, and alerts are provided at different intensity levels according to the user's mobility [22][77]. Table 4 lists the properties of voice and vibration navigation tools used for the visually impaired.
Table 4.
Properties of Voice and Vibration Navigation Tools.

11. RGB-D Sensor-Based Tools

This navigation system is based on an RGB-D sensor with range expansion. A consumer RGB-D camera supports range-based floor segmentation, as shown in Figure 4. The RGB sensor also supports colour sensing and object detection. The user interface is provided through sound map information and audio instructions [23][24][78,79]. Table 5 lists RGB-D sensor-based navigation tools with their properties for VI people (a rough floor-segmentation sketch follows the table caption below).
Figure 4.
RGB-D Sensor-Based System.
Table 5.
Properties of RGB-D Sensor-Based Navigation Tools.
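Range-based floor segmentation is commonly done by finding the dominant plane in the depth point cloud. The numpy sketch below uses RANSAC plane fitting as an illustrative stand-in for the cited systems' method; the tolerance and iteration count are assumptions.

```python
import numpy as np

def fit_plane(points: np.ndarray):
    """Least-squares plane through >=3 points: (normal, d) with n.x + d = 0."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return normal, -normal.dot(centroid)

def segment_floor(points: np.ndarray, iters: int = 200, tol: float = 0.02):
    """RANSAC: label the points lying on the dominant plane (assumed floor)."""
    rng = np.random.default_rng(0)
    best_mask = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        sample = points[rng.choice(len(points), size=3, replace=False)]
        normal, d = fit_plane(sample)
        mask = np.abs(points @ normal + d) < tol   # inliers within 2 cm
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return best_mask  # True for points on the floor plane
```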

12. Cellular Network-Based Tools

A cellular network system allows mobile phones to communicate with one another [25][80]. According to a research study [26][81], a simple way to localize cellular devices is to use the Cell-ID, which is available in most cellular networks. Studies [27][28][82,83] have proposed a hybrid approach that combines wireless local area networks, Bluetooth, and a cellular communication network to improve indoor navigation and positioning performance. However, such positioning is unstable and has significant navigation error due to cellular tower density and radio-frequency signal range. Table 6 summarizes information on different cellular approaches for indoor environments with positioning factors (a minimal Cell-ID lookup sketch follows the table caption below).
Table 6.
Properties of Cellular Network-Based Navigation Tools.
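A minimal sketch of Cell-ID positioning as described above: the handset's position is approximated by the known location of its serving tower. The tower IDs and coordinates below are hypothetical.

```python
# Hypothetical database of serving-cell tower locations (lat, lon).
CELL_TOWERS = {
    "310-410-1234-5678": (40.4406, -79.9959),
    "310-410-1234-5679": (40.4421, -79.9901),
}

def cell_id_position(serving_cell: str):
    """Cell-ID method: approximate the user position by the tower location.
    Accuracy is limited to the cell radius (hundreds of metres or more)."""
    return CELL_TOWERS.get(serving_cell)

print(cell_id_position("310-410-1234-5678"))  # -> (40.4406, -79.9959)
```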

13. Bluetooth-Based Tools

Bluetooth is a commonly used wireless protocol based on the IEEE 802.15.1 standard. The precision of this method is determined by the number of connected Bluetooth cells [29][84]. A 3D indoor navigation system proposed by Cruz and Ramos [30][85] is based on Bluetooth; a drawback is its reliance on pre-installed transmitters, which makes it less suitable for applications with critical requirements. An approach that combines Bluetooth beacons and Google Tango was proposed in [31][86]. Table 7 provides an overview of Bluetooth-based approaches used for the visually impaired in terms of cost and environment (a beacon-ranging sketch follows the table caption below).
Table 7.
Properties of Bluetooth-Based Navigation Tools.
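A common way to estimate distance from a Bluetooth beacon, sketched below, is the log-distance path-loss model; the TX power and path-loss exponent values are illustrative assumptions, not parameters from the cited systems.

```python
def beacon_distance(rssi_dbm: float, tx_power_dbm: float = -59.0,
                    path_loss_exponent: float = 2.0) -> float:
    """Log-distance path-loss model: distance in metres from RSSI.
    tx_power_dbm is the calibrated RSSI at 1 m; the exponent is ~2 in
    free space and typically higher indoors."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

print(round(beacon_distance(-75.0), 2))  # ~6.31 m under the assumed settings
```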

14. System for Converting Images into Sound

Depth sensors capture the spatial information that sighted people normally acquire with their eyes and hands. Different designs convert spatial data into sound, as sound can precisely guide users. Many approaches in this domain are inspired by auditory substitution devices that encode visual scenes from a video camera and generate sounds as an acoustic representation known as a "soundscape". Rehri et al. [32][87] proposed a system that improves navigation without vision: a personal guidance system based on the clear advantage of virtual sound guidance over spatial language. The authors argued that spatial information presented this way is quick and easy to perceive and understand.
In Nair et al. [33][88], an image recognition method for blind individuals was presented using sound, in a simple yet powerful approach that can help blind persons see the world with their ears. Nevertheless, image recognition through sound becomes problematic as image complexity increases. The processing follows a Canny-style edge detection pipeline: first, noise is removed using a Gaussian blur; second, the edges are found by computing intensity gradients; third, non-maximum suppression is applied to trace along the image edges; after that, threshold values are applied by the Canny edge detector. Once complete edge information has been acquired, the sound is generated (a minimal sketch is given below).
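A minimal sketch of this edge-extraction pipeline using OpenCV's Canny detector, which internally performs the Gaussian smoothing, gradient computation, non-maximum suppression, and hysteresis thresholding described above. The column-to-pitch mapping afterwards is a hypothetical illustration of the sonification step, not the authors' encoding.

```python
import cv2
import numpy as np

def edges_for_sonification(image_path: str) -> np.ndarray:
    """Extract a binary edge map from a grayscale image."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    blurred = cv2.GaussianBlur(gray, (5, 5), 1.4)          # noise removal
    return cv2.Canny(blurred, threshold1=50, threshold2=150)

def column_to_tone(edges: np.ndarray, col: int) -> float:
    """Illustrative sonification: map the mean edge height in a column
    to a pitch (Hz), scanning columns left to right as in
    "soundscape"-style encodings."""
    rows = np.nonzero(edges[:, col])[0]
    if rows.size == 0:
        return 0.0  # silence for columns with no edges
    mean_row = rows.mean()
    return 200.0 + 800.0 * (1.0 - mean_row / edges.shape[0])
```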
These different technologies are very effective for blind and visually impaired people and help them feel more confident and self-reliant. They can move, travel, play, and read books much as sighted people do. Technology keeps growing and is enhancing the ways the blind and visually impaired (B.V.I.) communicate with the world more confidently.

15. Infrared L.E.D.-Based Tools

Next comes the infrared L.E.D. category, suitable for producing periodic signals in indoor environments. The only drawback of this technology is that a line of sight (L.O.S.) must be maintained between the L.E.D.s and the detectors; moreover, it is a short-range communication technology [34][89]. In this study, a "mid-range portable positioning system" was designed using L.E.D.s for the visually impaired. It helps determine orientation, location, and distance to destination for those with weak eyesight, and it is reported to be 100% accurate for the partially blind. The system combines two techniques: infrared intensity and ultrasound "time of flight (T.O.F.)". The ultrasonic T.O.F. structure comprises an ultrasonic transducer, a beacon, and infrared L.E.D. circuits.
On the other hand, the receiver includes an ultrasonic sensor, infrared sensor, geomagnetic sensor, and signal processing unit. The prototype also includes infrared L.E.D. beacons and receivers. The system showed 90% accuracy for the fully visually impaired in indoor and outdoor environments.

15.1. Text-to-Speech Tools

One of the most used and recently developed Text-to-Speech (TTS) tools is Google TTS, a screen reader application designed by Google for the Android O.S. that reads the text on the screen out loud. It supports various languages. This tool was built entirely upon "DeepMind's speech synthesis" expertise: its API returns audio in almost human voice quality (Cloud Text-to-Speech, n.d.). OpenCV tools and libraries have been used to capture the image, from which the text is decoded and character recognition is performed; the written text is encoded by the machine using O.C.R. technology. The OpenCV library was recommended for its convenience of handling and use compared to other PC and electronic device platforms [35][90]. An ultrasonic sensor-based TTS tool was designed that gives vibration sensing to help the blind easily recognize and identify a bus name and number at a bus stop using audio processing techniques. The system was implemented in M.A.T.L.A.B., with image capture and most simulations performed using O.C.R. in M.A.T.L.A.B. to convert text into speech [36][91]. A text-to-voice technique is also used in most commercial G.P.S. devices, such as those developed by TomTom Inc. and Garmin Inc.; real-time performance is achieved with spoken navigation instructions [37][92]. An ideal illustration of combining text-to-voice techniques and voice search methods is Siri, shown in Figure 5. Siri is available for iOS, the operating system (O.S.) of Apple's iPhone. It is easy to talk to Siri and receive a response in a human-like voice. This system helps people with low vision and people who are blind or visually impaired in daily life, with both voice input and synthesized voice output (a hedged sketch of a capture-to-speech pipeline follows Figure 5 below).
Figure 5.
Siri for iPhone.
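The sketch below illustrates a capture-to-speech pipeline of the kind described above, assuming OpenCV for image capture, Tesseract (via pytesseract) as the O.C.R. engine, and gTTS for speech synthesis; these specific libraries are illustrative substitutes, not the tools used in the cited systems.

```python
# Capture -> O.C.R. -> speech sketch (pip install opencv-python
# pytesseract gTTS; Tesseract must also be installed on the system).
import cv2
import pytesseract
from gtts import gTTS

def read_text_aloud(camera_index: int = 0, out_file: str = "speech.mp3"):
    cap = cv2.VideoCapture(camera_index)
    ok, frame = cap.read()                 # capture one frame
    cap.release()
    if not ok:
        raise RuntimeError("camera capture failed")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    text = pytesseract.image_to_string(gray)   # decode printed text
    if text.strip():
        gTTS(text=text, lang="en").save(out_file)  # synthesize speech
    return text
```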
A Human–Computer Interface (HCI)-based wearable indoor navigation system is presented in [38][93]. An excellent example of an audio-based system is Google Voice Search. To use such systems effectively, proper training is required [39][40][26,94].

15.2. Speech-to-Text Tools

Amazon has designed and developed an S.T.T. tool named Amazon Transcribe, which uses a deep learning technique known as automatic speech recognition (A.S.R.) to convert voice into text within seconds and with precision. These tools are used by the blind and the visually impaired, and they are also used to transcribe customer service calls, automate subtitles, and create metadata for media assets to generate searchable archives [41][15]. I.B.M. Watson has also developed its own S.T.T. tool to convert audio to text. The technology uses deep learning AI algorithms that apply language structure, grammar, and the composition of audio signals to transcribe human speech into written text [42][95] (a generic S.T.T. sketch is given below).
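The snippet below is a generic S.T.T. sketch using the open-source SpeechRecognition package as a stand-in; it is not the API of Amazon Transcribe or I.B.M. Watson, only an illustration of the audio-to-text step those services perform.

```python
# Generic speech-to-text sketch (pip install SpeechRecognition).
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("announcement.wav") as source:   # hypothetical audio file
    audio = recognizer.record(source)              # load the whole clip

# Send the audio to a free web recognizer and print the transcript.
try:
    print(recognizer.recognize_google(audio))
except sr.UnknownValueError:
    print("Speech was unintelligible.")
```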
Table 8 shows the complete evaluation and analysis of current systems to help visually impaired users confidently move in their environment.

16. Feedback Tools for VI

In this section, different feedback tools for VI people are explained.

16.1. Tactile Compass

Providing effective and accurate directional feedback in an electronic travel aid for VI people is challenging. To address this problem, the authors of [43][96] presented a Tactile Compass, a handheld device that continuously guides a VI person while traveling by indicating directions with a pointing needle. Two lab experiments testing the system demonstrated that users could reach the goal with an average deviation of 3.03° (a small bearing-deviation sketch is given below).
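As a small illustration of the kind of computation behind such directional feedback, the sketch below derives the signed deviation between the user's heading and the bearing to the goal; positions and headings are hypothetical.

```python
import math

def signed_deviation(user_xy, goal_xy, heading_deg: float) -> float:
    """Angle (degrees, in [-180, 180]) the pointing needle should show:
    bearing to the goal minus the user's current heading."""
    dx = goal_xy[0] - user_xy[0]
    dy = goal_xy[1] - user_xy[1]
    bearing = math.degrees(math.atan2(dy, dx))
    return (bearing - heading_deg + 180.0) % 360.0 - 180.0

# User at the origin facing along the x-axis; goal ahead and to the left.
print(round(signed_deviation((0, 0), (3, 3), 0.0), 1))  # -> 45.0 degrees
```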

16.2. SRFI Tool

To overcome the information acquisition problem faced by VI people, the authors of [44][97] presented a method based on auditory feedback and a triboelectric nanogenerator. The tool, a self-powered ripple-inspired fingertip interactor (S.R.F.I.), assists the VI person by delivering information through feedback, and thanks to its refined structure, it gives the user high-quality text information. Using three channels, it can recognise Braille letters and deliver feedback to the VI user for information acquisition.

16.3. Robotic System Based on Haptic Feedback

To support VI people in sports, the authors of [45][98] introduced a robotic system based on haptic feedback. A drone determines the runner's position, and guidance is delivered through haptic feedback on the left lower leg, which steers the user in the desired direction. The system was assessed outdoors and tested with three vibration modalities: during the swing phase, during the stance phase, and continuous.

16.4. Audio Feedback-Based Voice-Activated Portable Braille

A portable device named Voice Activated Braille gives a VI person information about specific characters, with an Arduino providing direction. The system can benefit VI people by guiding them; it is a partial assistant that helps them read easily.

16.5. Adaptive Auditory Feedback System

This system helps a VI person use a desktop computer by continuously switching between speech and non-speech feedback, so the user does not need continuous instructions. The results of sixteen experiments assessing the system revealed efficient performance.

16.6. Olfactory and Haptic Feedback for VI People

The authors of [46][99] introduced a method based on olfactory and haptic feedback for VI people, designed to support them in entertainment and education and to offer them opportunities for learning and teaching. It is a 3D system that lets the user touch a 3D object while smell and sound are released from the olfactory device. The system was assessed by VI and blind people using a questionnaire.

16.7. Hybrid Method for VI

This system guides the VI person in indoor and outdoor environments. It is a hybrid of a sensor and a traditional stick, guiding the user with sensor readings and auditory feedback.

16.8. Radar-Based Handheld Device for VI

A handheld radar-based device has been presented for VI people. The distance measured by the radar sensor is converted into tactile information, which is mapped onto an array of vibration actuators. With the help of the information sensor and tactile stimulus, the VI user can be guided around obstacles. Table 8 gives a detailed overview of navigation applications for the visually impaired. The study covers various models and applications with their feasibility and characteristics, and discusses the merits and drawbacks of each application along with its features.
Table 8.
Navigation Applications for Visually Impaired Users.