Patient–Robot Co-Navigation of Crowded Hospital Environments: History
Subjects: Robotics

Intelligent multi-purpose robotic assistants have the potential to assist nurses with a variety of non-critical tasks, such as fetching objects, disinfecting areas, and supporting patient care.

  • robotic walking assistant
  • reinforcement learning
  • patient–robot co-navigation

1. Introduction

The nursing profession in the United States is critical to the provision of healthcare services and constitutes a large segment of the healthcare workforce. According to the US Bureau of Labor Statistics, the demand for nurses is expected to increase by 9% from 2020 to 2030 [1], driven by the increasing incidence of chronic diseases and the aging population’s need for healthcare services. The health and wellness (both physical and mental) of healthcare professionals, including nurses [2], are crucial to ensuring safe and high-quality healthcare delivery. A recent study [3] conducted across six countries highlights the relationship between nurse burnout and perceptions of care quality, emphasizing the importance of addressing the well-being of nurses in the healthcare system. Nurse burnout is a widespread phenomenon that can lead to decreased energy, increased fatigue, and a decline in patient care. According to Mo et al. [4], the COVID-19 pandemic has only exacerbated the situation, with nurses facing increased workloads, exposure to the virus, and the emotional stress of caring for patients in a high-stakes environment.
Nurses shoulder a multitude of responsibilities, including evaluating a patient’s condition; documenting medical histories, vital signs, and symptoms; administering treatments; collaborating with physicians and other healthcare practitioners; transferring or ambulating patients, especially after surgery; and procuring medical supplies. Addressing nurse burnout is essential to sustaining the nursing workforce and maintaining safe, high-quality healthcare delivery. This can be achieved through a range of interventions, including adequate staffing levels, the promotion of work–life balance, and the use of robots. Delegating routine tasks, such as the procurement of items or patient ambulation, to robots reduces nurses’ workload and enables them to concentrate on tasks that require their specialized knowledge and skills.
In recent years, advances in robotic technology and artificial intelligence (AI) have made it possible for robots to take over some routine nursing tasks, allowing nurses to focus on direct patient care. In response to the COVID-19 pandemic, various robotic systems have been introduced to enhance hospital logistics, including the disinfection of spaces and patient care [5,6]. These include robots that perform delivery and object-retrieval tasks, reducing the need for human presence in potentially contaminated areas and freeing medical personnel to focus on more critical tasks [7], as well as robots for rehabilitation [8,9], walking assistance [10], and vital-sign monitoring [11], all essential components of patient care that help improve patient outcomes and support the healing process. However, most hospitals cannot afford to acquire and assign a specialized robot for each task. A unified robotic system that covers a wide range of hospital tasks is a potential solution. Accordingly, following the example of industrial applications that use multi-purpose robotic systems [12,13], a commercially available mobile manipulator could be reconfigured to assist nurses with a variety of tasks.

2. Robotic Walking Assistants

Robotic walking assistants (RWAs) fall into several categories, such as wearable robots that provide gait [20,21] or balance [22,23] rehabilitation, and non-wearable robotic systems that provide support during walking [24,25]. This entry focuses on non-wearable RWAs. RWAs come in different designs, such as smart canes or walking frames, and with different capabilities (e.g., obstacle avoidance, gait analysis).
Abubakar et al. [26] proposed a walking assistant robot called the adaptive robotic nursing assistant (ARNA), a service robot that helps patients with day-to-day activities, such as walking and sitting. It comprises an armbar that senses torque and supports the patient, a seven-degree-of-freedom (DOF) arm, and a multitude of sensors for obstacle detection, including an ultrasonic sensor, a LIDAR, and an RGB-Depth (RGB-D) camera. As an additional safety measure, the robot is equipped with a bump sensor that stops it abruptly when it hits an obstacle. One of ARNA’s tasks is to support a person walking behind the robot. Although ARNA can detect nearby obstacles, it is not designed to navigate around them and relies on the patient to guide it around the obstacle. Ramanathan et al. [27] proposed a visual perception pipeline that can help patients cross obstacles, but their pipeline focuses mainly on crossing obstacles rather than maneuvering around them.
Another assistive robot, the Mobile Collaborative Robotic Assistant (MOCA) [12], comprises an omnidirectional base with a 7-DOF robotic arm and a soft hand as an end effector. MOCA is a collaborative robot whose primary purpose is to collaborate with humans on industrial work, such as drilling, teleoperation tasks, or the co-manipulation of objects. MOCA has been used as a platform to improve standing balance performance while measuring body posture [28]; however, the system has not been used for walking assistance. Chalvatzaki et al. [24] proposed a custom-made intelligent robotic rollator called iWalk, which uses a LIDAR for gait detection and an RGB-D camera to detect whether the patient loses balance and falls to the ground. While iWalk calculates the patient’s gait parameters and additionally provides fall detection, it is not designed to detect obstacles and maneuver the patient around them.
In the previous examples, the human is mainly in control of the robot, and the robot complies with the user’s commands. The systems presented so far focus on supporting the patient and rely on human guidance to avoid obstacles. This can burden patients, who must concentrate on their walking while simultaneously guiding the robot around obstacles. To achieve an intuitive interaction between the robot and the patient, a shared navigation approach is essential: whenever the robot detects an obstacle (static or dynamic), it should gently steer the patient away from it without losing contact with the patient. This behavior can be called shared control between the patient and the robot. To avoid static obstacles while supporting the patient, Garcia et al. [29] proposed using Pepper, a commercially available socially assistive humanoid robot. Pepper is equipped with touch sensors on its left and right shoulders that sense the pressure applied by the patient, which drives the robot forward. In addition, Pepper’s built-in LIDAR, infrared, and sonar sensors are used for obstacle avoidance, reducing the robot’s reliance on the user’s input. The system can avoid static obstacles but does not account for dynamic obstacles, e.g., humans in the environment. Song et al. [30] proposed a shared control strategy for robotic walking assistance, implemented on a custom-made prototype robot with a torque sensor at arm level to sense the patient’s steering input and two LIDARs just above ground level for gait estimation and obstacle avoidance. In this control scheme, the inputs from the LIDARs and the torque sensor are combined to generate the robot’s velocity command, which actively guides the user in a direction that avoids collisions; a minimal sketch of this idea follows.
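The sketch below (Python) illustrates shared control in the spirit of Song et al. [30]: the commanded velocity blends the patient’s steering input with a repulsive obstacle-avoidance term. All function names, gains, and thresholds are illustrative assumptions, not values from the original system.

```python
import numpy as np

def shared_control_velocity(user_torque, obstacle_points, k_user=0.5,
                            k_rep=0.3, influence_radius=1.5):
    """Blend the patient's steering input with obstacle repulsion.
    (Illustrative sketch; gains and radii are assumptions.)

    user_torque     : torque at the armbar (Nm); sign gives turn direction
    obstacle_points : (N, 2) array of obstacle positions in the robot frame (m)
    Returns (linear_velocity, angular_velocity) commands.
    """
    # Patient intent: torque maps directly to a desired turn rate.
    omega = k_user * user_torque

    # Repulsive term: each nearby obstacle nudges the heading away from it.
    for p in obstacle_points:
        dist = np.linalg.norm(p)
        if 0.0 < dist < influence_radius:
            bearing = np.arctan2(p[1], p[0])            # obstacle direction
            weight = (influence_radius - dist) / influence_radius
            omega -= k_rep * weight * np.sign(bearing)  # steer away

    # Slow down as the closest obstacle gets nearer.
    min_dist = min((np.linalg.norm(p) for p in obstacle_points),
                   default=influence_radius)
    v = 0.6 * min(1.0, min_dist / influence_radius)     # cap at 0.6 m/s
    return v, omega
```

In practice, the repulsive gain must be small enough that corrective steering feels gentle to the patient rather than abrupt, so that the robot guides without wrestling for control.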
The presented robots focus on estimating gait parameters, adjusting their speed to the human walking speed, and sensing static obstacles in the environment. Some of the work also addresses avoiding static obstacles. However, crowded environments such as hospitals contain humans who move around. Therefore, an RWA must be able to detect both static and dynamic obstacles and navigate safely without causing harm while simultaneously supporting the patient during walking. This is a challenging but important task that the proposed research addresses. The following section focuses on crowd navigation methods for mobile robots.

3. Crowd Navigation Using Reinforcement Learning

Safe and efficient navigation through human crowds is essential for RWAs in hospitals, which must move from one point to another while assisting patients. Reinforcement learning (RL) techniques have been used successfully in a variety of obstacle avoidance tasks. Wenzel et al. [31] used RL to avoid obstacles, such as walls, using camera images as input, and RL techniques have been used to navigate a robot through crowded workspaces [32]. Choi et al. [33] presented a study in which they deployed an RL algorithm, the soft actor–critic (SAC) [34], on a large mobile base robot, SR7, in a real-world setting to navigate around both static and dynamic obstacles. To enhance the robot’s navigational capabilities, global path planners were integrated into the system. The efficacy of the developed SAC algorithm was compared with conventional path planning approaches, such as the timed elastic band (TEB) [35] and the dynamic window approach (DWA) [36]. The study revealed that the SAC algorithm generalized to unfamiliar environments better than the traditional path-planning techniques, a finding that underscores the need to explore RL-based obstacle avoidance further. To train an RL algorithm to find an optimal path through a crowded space, a simulation environment and a suitable reward signal are required; a generic reward is sketched below.
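The following sketch shows a dense-reward shape commonly used in the crowd-navigation literature: progress toward the goal, minus collision and discomfort penalties. It is a generic illustration, not the exact reward from [33]; all constants are assumptions.

```python
import numpy as np

def navigation_reward(robot_pos, goal_pos, prev_dist, human_positions,
                      collision_radius=0.3, discomfort_dist=0.5):
    """Generic crowd-navigation reward (illustrative constants).
    Returns (reward, episode_done)."""
    dist_to_goal = np.linalg.norm(goal_pos - robot_pos)
    if dist_to_goal < 0.2:                        # goal reached
        return 10.0, True

    min_human_dist = min((np.linalg.norm(h - robot_pos)
                          for h in human_positions), default=np.inf)
    if min_human_dist < collision_radius:         # collided with a pedestrian
        return -10.0, True

    reward = prev_dist - dist_to_goal             # reward forward progress
    if min_human_dist < discomfort_dist:          # penalize close passes
        reward -= 0.5 * (discomfort_dist - min_human_dist)
    return reward, False
```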
Biswas et al. [37] proposed a simulation environment based on the real-world UCY [38] and ETH [39] datasets, which were collected from groups of pedestrians at different locations. Furthermore, the authors recreate virtual surroundings that match the actual physical sites where the pedestrian data were collected. Similarly, Chen et al. [40] proposed another easy-to-use RL simulation environment that uses optimal reciprocal collision avoidance (ORCA) [41] to simulate the walking patterns of people in a crowd. Chen et al. also proposed an attention-based deep reinforcement learning method, called deep V-learning, for efficiently navigating the robot through the crowd. Deep V-learning models the human-to-human interactions near the robot and the human–robot interactions using a self-attention mechanism for better maneuverability. The mobile robot is able to move from one point to the next while avoiding humans (dynamic obstacles). However, with the V-learning model, the robot agent cannot handle static obstacles and freezes. To tackle this issue, Liu et al. [42] proposed a framework called social obstacle avoidance using deep reinforcement learning (SOADRL) that enables a mobile robot to maneuver through a crowd. SOADRL uses static maps that explicitly specify the positions of static obstacles as an additional input. However, SOADRL considers only 2–7 pedestrians in simulation when static obstacles are present, and its success rate is 60% for a mobile robot with a limited field of view (FOV) of 72 degrees. In addition, both deep V-learning and SOADRL focus on small mobile robots that are easy to maneuver, and neither considers any human–robot interaction requiring physical contact, e.g., walking with a patient.
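The core of such an attention mechanism can be sketched as follows: each human’s state is embedded jointly with the robot’s state, scored, and softmax-pooled into a fixed-size crowd representation from which the state value is estimated. The PyTorch sketch below is a simplified illustration in the spirit of deep V-learning [40]; the layer sizes and state dimensions are assumptions, not those of the original network.

```python
import torch
import torch.nn as nn

class AttentionValueNet(nn.Module):
    """Sketch of an attention-based value network: per-human features are
    scored, softmax-weighted, and pooled into a crowd representation
    before value estimation. Dimensions are illustrative."""

    def __init__(self, robot_dim=6, human_dim=7, hidden=64):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(robot_dim + human_dim, hidden),
                                   nn.ReLU())
        self.score = nn.Linear(hidden, 1)          # attention logit per human
        self.value = nn.Sequential(nn.Linear(robot_dim + hidden, hidden),
                                   nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, robot_state, human_states):
        # robot_state: (B, robot_dim); human_states: (B, N, human_dim)
        n = human_states.size(1)
        robot_rep = robot_state.unsqueeze(1).expand(-1, n, -1)
        feats = self.embed(torch.cat([robot_rep, human_states], dim=-1))
        attn = torch.softmax(self.score(feats), dim=1)  # weights over humans
        crowd = (attn * feats).sum(dim=1)               # weighted pooling
        return self.value(torch.cat([robot_state, crowd], dim=-1))
```

Because the pooling step collapses any number of humans into one fixed-size vector, the same network can be queried with varying crowd sizes.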
Another RL-based crowd navigation technique, called DS-RNN and based on spatiotemporal graphs (st-graphs), was proposed by Liu et al. [43] and outperformed many open-source crowd navigation strategies. St-graphs are graph-based methods for representing high-level spatiotemporal structure: the graph’s nodes typically represent input features (e.g., human body key points, locations of objects), and the edges capture the spatiotemporal interactions between the features. St-graphs have been used to model a variety of human activity detection and anticipation tasks. For example, Jain et al. [44] used st-graphs to anticipate human actions, such as drinking, in a series of human–object interactions. Vemula et al. [45] proposed using st-graphs to model crowd behavior without any robotic interactions; the authors consider the influence of each pedestrian on all other pedestrians, both in the vicinity and far away. Similarly, Liu et al. [43] used the same st-graph approach to train a model, which a robot then uses to anticipate the crowd’s movement and navigate it safely. An st-graph has factors that observe node and edge features at each time step and perform some computation on them [44]; each factor is represented by a recurrent neural network (RNN). The st-graph is structured so that the combined RNN network (node and edge RNNs) captures the relationship between spatial and temporal features to predict the optimal velocity the robot should take to reach the goal. For crowd navigation, the spatial features are the current locations of the pedestrians, and the temporal feature is the robot’s velocity. This technique outperforms deep V-learning in success rate, defined as the percentage of trials in which the robot reaches the destination. Furthermore, the authors note that the method scales well with an increasing number of humans. However, it does not take static obstacles into account and does not consider any human–robot interaction requiring physical contact.
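A heavily simplified node–edge RNN sketch is shown below: one recurrent network summarizes spatial robot–human edge features, another tracks the robot’s temporal features, and a node network fuses both into a velocity command. The actual DS-RNN [43] handles multiple humans with attention-weighted edges; this sketch assumes a single nearest human, and all dimensions are illustrative.

```python
import torch
import torch.nn as nn

class NodeEdgeRNN(nn.Module):
    """Minimal st-graph sketch (illustrative, not the exact DS-RNN):
    spatial-edge and temporal-edge RNNs feed a node RNN that predicts
    the robot's next velocity command."""

    def __init__(self, edge_dim=2, robot_dim=4, hidden=32):
        super().__init__()
        self.spatial_rnn = nn.GRU(edge_dim, hidden, batch_first=True)
        self.temporal_rnn = nn.GRU(robot_dim, hidden, batch_first=True)
        self.node_rnn = nn.GRU(2 * hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)   # (vx, vy) command

    def forward(self, edge_seq, robot_seq):
        # edge_seq:  (B, T, edge_dim)  relative position of the nearest human
        # robot_seq: (B, T, robot_dim) robot state over time
        h_s, _ = self.spatial_rnn(edge_seq)     # spatial interactions
        h_t, _ = self.temporal_rnn(robot_seq)   # robot dynamics
        h_n, _ = self.node_rnn(torch.cat([h_s, h_t], dim=-1))
        return self.head(h_n[:, -1])            # velocity for current step
```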
All of the aforementioned RL techniques focus on mobile robots without direct physical contact with a human. However, staying close to the patient and supporting the patient are crucial aspects of a walking assistant robot. In addition, these RL techniques limit their human interactions to training the algorithms not to cross a person’s boundary radius. Simply not crossing this boundary is not enough; the robot should also adhere to social norms while passing by people. To address this issue, Joosse et al. [46] specified explicit limits on interpersonal distance by conducting numerous social experiments in which a robot approaches humans, who then fill out a survey expressing their comfort or discomfort. The authors conclude that distances between 0.0 and 0.45 m constitute a person’s personal space, and that the robot can pass anywhere between 1.2 and 3.6 m from humans without raising their discomfort. Furthermore, the authors characterize a slow-approaching robot as moving at 0.4 m/s and a fast-approaching robot at 1 m/s. It can thus be concluded that when navigating a crowd, the robot should keep its speed low while passing close to a human.
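These findings translate naturally into a proxemics-based speed limit. The sketch below caps the robot’s speed as a function of the distance to the nearest person, reusing the 0.45 m personal-space radius and the 0.4/1.0 m/s slow/fast speeds reported by Joosse et al. [46]; the linear ramp between the two speeds is an illustrative design choice, not part of the original study.

```python
def proxemic_speed_limit(dist_to_nearest_human,
                         personal_radius=0.45, social_radius=1.2,
                         slow_speed=0.4, fast_speed=1.0):
    """Cap robot speed (m/s) by distance to the nearest person (m).
    Thresholds follow Joosse et al. [46]; the ramp is illustrative."""
    if dist_to_nearest_human <= personal_radius:
        return 0.0                       # inside personal space: stop
    if dist_to_nearest_human >= social_radius:
        return fast_speed                # comfortably far: full speed
    # Linearly ramp from slow to fast across the transition zone.
    frac = ((dist_to_nearest_human - personal_radius)
            / (social_radius - personal_radius))
    return slow_speed + frac * (fast_speed - slow_speed)
```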
The robots discussed in this section each address a single task from the set of capabilities that an autonomous walking assistant robot should possess. For example, some focus on gait tracking and static obstacle detection, and others on crowd navigation; the crowd-navigating mobile robots do not involve any physical contact with a human guiding them through the crowd. However, both gait tracking and crowd navigation are essential for an RWA used in hospitals: the robot should support the patient at all times while walking and simultaneously avoid static obstacles (e.g., sofas, beds, chairs) and dynamic obstacles (e.g., crowds) in order to operate safely.

This entry is adapted from the peer-reviewed paper 10.3390/app13074576
