Patient–Robot Co-Navigation of Crowded Hospital Environments: History

Intelligent multi-purpose robotic assistants have the potential to assist nurses with a variety of non-critical tasks, such as fetching objects, disinfecting areas, or supporting patient care.

  • robotic walking assistant
  • reinforcement learning
  • patient–robot co-navigation

1. Introduction

The nursing profession in the United States is critical to the provision of healthcare services and constitutes a large segment of the healthcare workforce. According to the US Bureau of Labor Statistics, the demand for nurses is expected to increase by 9% from 2020 to 2030 [1], driven by the increasing incidence of chronic diseases and the aging population’s need for healthcare services. The health and wellness (both physical and mental) of healthcare professionals, including nurses [2], are crucial to ensuring safe and high-quality healthcare delivery. A study [3] conducted across six countries highlights the relationship between nurse burnout and perceptions of care quality, emphasizing the importance of addressing the well-being of nurses in the healthcare system. Nurse burnout is a widespread phenomenon that can lead to decreased energy, increased fatigue, and a decline in patient care. According to Mo et al. [4], the COVID-19 pandemic has only exacerbated the situation, with nurses facing increased workloads, exposure to the virus, and the emotional stress of caring for patients in a high-stakes environment.
Nurses shoulder a multitude of responsibilities, including evaluating a patient’s condition; documenting medical histories, vital signs, and symptoms; administering treatments; collaborating with physicians and other healthcare practitioners; transferring or ambulating patients, especially post-surgery; and procuring medical supplies. Addressing nurse burnout is essential to sustaining the nursing workforce and maintaining safe, high-quality healthcare delivery. This can be achieved through a range of interventions, including adequate staffing levels, promotion of work–life balance, and the use of robots. Delegating routine tasks, such as the procurement of items or patient ambulation, to robots reduces the workload of nurses and enables them to concentrate on tasks that require their specialized knowledge and skills.
In recent years, with advances in robotic technology and artificial intelligence (AI), robots have gained the potential to take over some of nurses’ routine tasks, allowing nurses to focus on direct patient care. In response to the COVID-19 pandemic, various robotic systems have been introduced to enhance hospital logistics and patient care, including the disinfection of spaces [5][6]. Examples include robots that perform delivery and object retrieval, reducing the need for human presence in potentially contaminated areas and freeing medical personnel to focus on more critical tasks [7], as well as robots for rehabilitation [8][9], walking assistance [10], and vital-sign monitoring [11], all of which are essential components of patient care that help improve patient outcomes and support the healing process. However, most hospitals cannot afford to acquire and assign specialized robots for each task. A unified robotic system that could cover a large range of hospital tasks is a potential solution. Accordingly, following the example of industrial applications that utilize multi-purpose robotic systems [12][13], a commercially available mobile manipulator could be reconfigured to assist nurses with a variety of tasks.

2. Robotic Walking Assistants

There are several types of robotic walking assistants (RWAs), such as wearable robots that provide gait [14][15] or balance [16][17] rehabilitation, and non-wearable robotic systems that provide support during walking [18][19]. This research focuses on non-wearable RWAs. RWAs can have different designs, such as smart canes or walking frames, and different capabilities (e.g., obstacle avoidance and gait analysis).
Abubakar et al. [20] proposed the adaptive robotic nursing assistant (ARNA), a service robot that helps patients with day-to-day activities, such as walking and sitting. It comprises an armbar that senses torque and supports the patient, a seven-degree-of-freedom (DOF) arm, and a multitude of sensors for obstacle detection, including an ultrasonic sensor, a LIDAR, and an RGB-Depth (RGB-D) camera. As an additional safety measure, the robot is equipped with a bump sensor that stops it immediately upon contact with an obstacle. One of ARNA’s tasks is to support a person walking behind the robot. Although ARNA is able to detect nearby obstacles, it is not designed to navigate around them and relies on the patient to guide it around each obstacle. Ramanathan et al. [21] proposed a visual perception pipeline that can help patients cross obstacles, but their pipeline focuses mainly on the crossing task rather than on maneuvering around obstacles.
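ARNA’s armbar provides a torque measurement of the patient’s push. The entry does not detail ARNA’s control law, but a common way to convert such sensed forces into compliant base motion is an admittance controller, in which the measurement drives a virtual mass–damper whose output velocity is commanded to the base. The following Python sketch illustrates that general idea under this assumption; the class, interface, and parameter values are illustrative and are not ARNA’s implementation.

```python
import numpy as np

class AdmittanceController:
    """Maps a sensed armbar force/torque to base velocity through a
    virtual mass-damper: M * dv/dt + D * v = f_sensed.
    A generic sketch; parameter values are illustrative, not ARNA's."""

    def __init__(self, mass=(20.0, 15.0), damping=(35.0, 25.0), v_max=(0.6, 0.8)):
        self.M = np.array(mass)       # virtual inertia: [linear, angular]
        self.D = np.array(damping)    # virtual damping: [linear, angular]
        self.v_max = np.array(v_max)  # velocity limits [m/s, rad/s]
        self.v = np.zeros(2)          # current commanded [v, omega]

    def step(self, wrench, dt):
        """wrench = [push force along heading (N), steering torque (N*m)]."""
        # Integrate the mass-damper dynamics one step (explicit Euler).
        dv = (wrench - self.D * self.v) / self.M
        self.v = np.clip(self.v + dv * dt, -self.v_max, self.v_max)
        return self.v  # (linear velocity, angular velocity) for the base

# Example: a gentle forward push with slight steering, held for 0.5 s at 100 Hz.
ctrl = AdmittanceController()
for _ in range(50):
    v, omega = ctrl.step(np.array([10.0, 1.5]), dt=0.01)
print(f"commanded v={v:.2f} m/s, omega={omega:.2f} rad/s")
```

With this structure, a harder push yields a proportionally faster (but rate- and magnitude-limited) base motion, which is the compliant behavior a walking-support robot needs.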
Another assistive robot, the Mobile Collaborative Robotic Assistant (MOCA) [12], comprises an omnidirectional base with a 7-DOF robotic arm and a soft hand as the end effector. MOCA is a collaborative robot whose primary purpose is to collaborate with humans on industrial work, such as drilling, teleoperated tasks, or the co-manipulation of objects. MOCA has been used as a platform to improve standing balance performance while measuring body posture [22]; however, the system has not been used for walking assistance. In addition, Chalvatzaki et al. [18] proposed a custom-made intelligent robotic rollator called iWalk, which uses LIDAR for gait detection and an RGB-D camera to detect whether the patient loses balance and falls to the ground. While iWalk calculates the patient’s gait parameters and provides extra functionality for fall detection, it is not designed to detect obstacles and maneuver the patient around them.
In the previous examples, the human is primarily in control, and the robot complies with the user’s commands. The systems presented so far focus on supporting the patient and rely on human guidance to avoid obstacles. This may burden the patient, who has to focus on walking and, at the same time, guide the robot around obstacles. To achieve an intuitive interaction between the robot and the patient, a shared navigation approach is therefore essential: whenever the robot detects an obstacle (static or dynamic), it should gently steer the patient away from it without losing contact with the patient. This behavior can be called shared control between the patient and the robot. To avoid static obstacles while supporting the patient, Garcia et al. [23] proposed the use of Pepper, a commercially available socially assistive humanoid robot. Pepper is equipped with touch sensors on the left and right shoulders that sense the pressure applied by the patient, and the robot moves forward in response. In addition, Pepper’s built-in LIDAR, infrared, and sonar sensors are used for obstacle avoidance, reducing the robot’s reliance on the user’s input. The system is able to avoid static obstacles but does not take into account dynamic obstacles, e.g., humans in the environment. Song et al. [24] proposed a shared control strategy for a walking-assistance robot, implemented on a custom-made prototype with a torque sensor at arm level to sense the patient’s steering input and two LIDARs just above ground level for gait estimation and obstacle avoidance. In this control scheme, the inputs from the LIDARs and the torque sensor are combined to generate the robot’s velocity command, which actively guides the user in a direction that avoids collisions.
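To make the notion of shared control concrete, the sketch below blends a user-commanded velocity with a repulsive, potential-field-style term computed from nearby obstacle positions, so the robot stays compliant when the path is clear and steers away otherwise. This is a generic illustration of the blending idea, not the controller published by Song et al. [24]; the gains and thresholds are illustrative.

```python
import numpy as np

def shared_control_velocity(v_user, obstacles, d_safe=1.0, k_rep=0.8, v_max=0.8):
    """Blend the user's intended planar velocity with a repulsive
    obstacle term (potential-field style). A generic shared-control
    sketch, not the published controller of Song et al. [24].

    v_user    : (2,) velocity the patient requests via the torque sensor
    obstacles : (N, 2) obstacle positions relative to the robot (m)
    """
    v = np.asarray(v_user, dtype=float).copy()
    for p in np.atleast_2d(obstacles):
        d = np.linalg.norm(p)
        if 1e-6 < d < d_safe:
            # Push away from the obstacle, more strongly as it gets closer.
            v -= k_rep * (1.0 / d - 1.0 / d_safe) * (p / d)
    speed = np.linalg.norm(v)
    if speed > v_max:                 # keep the command within safe limits
        v *= v_max / speed
    return v

# Example: the patient steers straight ahead while an obstacle sits
# slightly ahead and to one side; the blended command veers away from it.
cmd = shared_control_velocity([0.5, 0.0], [[0.7, 0.3]])
print(cmd)
```

Because the repulsive term vanishes beyond `d_safe`, the patient retains full authority whenever no obstacle is nearby, which matches the compliant behavior described above.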
The robots presented above focus on estimating gait parameters, adjusting their speed to the human walking speed, and sensing static obstacles in the environment; some of the work also addresses avoiding static obstacles. However, crowded environments such as hospitals contain people who move around. It is therefore important for an RWA to detect both static and dynamic obstacles and to navigate safely without causing harm while simultaneously supporting the patient during walking. This is a challenging but important task that the presented research addresses. The following section focuses on crowd navigation methods for mobile robots.

3. Crowd Navigation Using Reinforcement Learning

Safe and efficient navigation through human crowds is essential for RWAs in hospitals to move from one point to another while assisting patients. Reinforcement learning (RL) techniques have been successfully used in a variety of obstacle avoidance tasks. Wenzel et al. [25] used RL to avoid obstacles, such as walls, using camera images as input, and RL techniques have also been used to navigate a robot through crowded workspaces [26]. Choi et al. [27] deployed the soft actor–critic (SAC) algorithm [28] on SR7, a large mobile-base robot, in a real-world setting to navigate around both static and dynamic obstacles. To enhance the robot’s navigational capabilities, global path planners were integrated into the system. The efficacy of the SAC-based approach was compared with conventional path planning approaches, such as the timed elastic band (TEB) [29] and the dynamic window approach (DWA) [30]. The study revealed that the SAC algorithm generalized to unfamiliar environments better than the traditional path-planning techniques, a finding that underscores the need to explore RL-based obstacle avoidance further. To train an RL algorithm to find an optimal path through a crowded space, a simulation environment is required.
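To make this workflow concrete, the sketch below trains a SAC agent on a toy 2D goal-reaching environment using the stable-baselines3 and gymnasium libraries (assumed dependencies). The environment, observation encoding, and reward are illustrative stand-ins, not the SR7 setup of Choi et al. [27].

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import SAC

class GoalReachEnv(gym.Env):
    """Toy 2D navigation task: the agent outputs a planar velocity and is
    rewarded for closing the distance to a random goal. A stand-in for a
    real crowded-hospital setting, sized to illustrate the SAC workflow."""

    def __init__(self, dt=0.1, max_steps=200):
        super().__init__()
        self.dt, self.max_steps = dt, max_steps
        # Observation: goal position relative to the robot (m).
        self.observation_space = spaces.Box(-10.0, 10.0, shape=(2,), dtype=np.float32)
        # Action: commanded planar velocity (vx, vy) in m/s.
        self.action_space = spaces.Box(-1.0, 1.0, shape=(2,), dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.pos = np.zeros(2, dtype=np.float32)
        self.goal = self.np_random.uniform(-5.0, 5.0, size=2).astype(np.float32)
        self.steps = 0
        return self.goal - self.pos, {}

    def step(self, action):
        prev = np.linalg.norm(self.goal - self.pos)
        self.pos += np.asarray(action, dtype=np.float32) * self.dt
        dist = np.linalg.norm(self.goal - self.pos)
        self.steps += 1
        reached = bool(dist < 0.2)
        # Dense progress reward plus a bonus for reaching the goal.
        reward = float(prev - dist) + (10.0 if reached else 0.0)
        truncated = self.steps >= self.max_steps
        return self.goal - self.pos, reward, reached, truncated, {}

# Off-policy training: SAC learns a stochastic policy with an entropy bonus.
model = SAC("MlpPolicy", GoalReachEnv(), verbose=0)
model.learn(total_timesteps=20_000)
```

In a crowd navigation setting, the observation would additionally carry the states of nearby pedestrians (e.g., from LIDAR or a tracker), which is where the simulation environments discussed next come in.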
Biswas et al. [31] proposed a simulation environment based on the real-world pedestrian datasets UCY [32] and ETH [33], which were collected from groups of pedestrians at different locations; the authors also recreate virtual surroundings that match the physical sites where the pedestrian data were collected. Similarly, Chen et al. [34] proposed an easy-to-use RL simulation environment that uses optimal reciprocal collision avoidance (ORCA) [35] to simulate the walking patterns of people in a crowd. Chen et al. also proposed an attention-based deep reinforcement learning method, called deep V-learning, for efficiently navigating a robot through a crowd. Deep V-learning models the human-to-human interactions near the robot and the human–robot interactions using a self-attention mechanism for better maneuverability. The mobile robot is able to move from one point to the next while avoiding humans (dynamic obstacles). However, the deep V-learning agent cannot handle static obstacles and freezes. To tackle this issue, Liu et al. [36] proposed social obstacle avoidance using deep reinforcement learning (SOADRL), a framework that enables a mobile robot to maneuver through a crowd. SOADRL takes as additional input static maps that explicitly specify the positions of static obstacles. However, SOADRL considers only 2–7 pedestrians in simulations with static obstacles, and its success rate is 60% for a mobile robot with a limited field of view (FOV) of 72 degrees. In addition, both deep V-learning and SOADRL target small, easily maneuverable mobile robots and do not consider human–robot interactions requiring physical contact, e.g., walking with a patient.
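The core of the attention mechanism in deep V-learning is to pool a variable number of human states into a single fixed-size crowd feature using learned weights. The PyTorch sketch below illustrates that pooling idea; the layer sizes and per-human state encoding are assumptions and do not reproduce the exact architecture of [34].

```python
import torch
import torch.nn as nn

class CrowdAttentionEncoder(nn.Module):
    """Pools a variable number of human states into one fixed-size crowd
    feature via learned attention weights, in the spirit of deep
    V-learning [34]; the layer sizes are illustrative, not the paper's."""

    def __init__(self, human_dim=4, embed_dim=64):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(human_dim, embed_dim), nn.ReLU())
        self.score = nn.Linear(embed_dim, 1)   # one attention logit per human

    def forward(self, humans):
        # humans: (batch, n_humans, human_dim), e.g. relative position + velocity
        e = self.embed(humans)                  # (B, N, embed_dim) per-human embedding
        w = torch.softmax(self.score(e), dim=1) # (B, N, 1) weights across humans
        return (w * e).sum(dim=1)               # (B, embed_dim) pooled crowd feature

# Example: 5 humans, each described by a relative (x, y, vx, vy) state.
enc = CrowdAttentionEncoder()
crowd_feature = enc(torch.randn(1, 5, 4))
print(crowd_feature.shape)  # torch.Size([1, 64])
```

Because the attention weights sum to one regardless of crowd size, the pooled feature has a fixed dimension, so a downstream value or policy network can handle any number of pedestrians.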
Another RL-based crowd navigation technique, DS-RNN, based on spatiotemporal graphs (st-graphs), was proposed by Liu et al. [37] and outperformed many open-source crowd navigation strategies. St-graphs are graph-based representations of high-level spatiotemporal structure: the graph’s nodes typically represent input features (e.g., human body key points, locations of objects, etc.), and the edges capture the spatiotemporal interactions between those features. St-graphs have been used to model a variety of human activity detection and anticipation tasks. For example, Jain et al. [38] used st-graphs to anticipate human actions, such as drinking, in a series of human–object interactions. Vemula et al. [39] proposed st-graphs for modeling crowd behavior without any robotic interactions, considering the influence of each pedestrian on all other pedestrians, both nearby and far away. Similarly, Liu et al. [37] used the same st-graph approach to train a model that a robot then uses to anticipate crowd movement and navigate the crowd safely. An st-graph has factors that observe node and edge features at each time step and perform computations on those features [38]; in DS-RNN, each factor is represented by a recurrent neural network (RNN). The st-graph is structured so that the combined network of node and edge RNNs captures the relationship between spatial and temporal features to predict the optimal velocity the robot should take to reach the goal. For crowd navigation, the spatial features are the current locations of the pedestrians, and the temporal feature is the robot’s velocity. This technique outperforms deep V-learning in success rate, defined as the percentage of trials in which the robot successfully reaches its destination. Furthermore, the authors note that the method scales well with an increasing number of humans. However, it does not take static obstacles into account and does not consider human–robot interactions requiring physical contact.
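The following PyTorch sketch illustrates this factorization in the spirit of DS-RNN: a GRU encodes each robot–human spatial edge over time, attention pools the edge features across humans, and a node GRU fuses the pooled crowd feature with the robot’s own state to produce an action. The dimensions and wiring are illustrative assumptions, not the exact network of [37].

```python
import torch
import torch.nn as nn

class STGraphPolicy(nn.Module):
    """Skeleton of an st-graph policy in the spirit of DS-RNN [37]:
    an edge GRU encodes each robot-human spatial feature over time,
    attention pools the edges, and a node GRU fuses the result with the
    robot state. Dimensions are illustrative, not the paper's."""

    def __init__(self, edge_dim=2, robot_dim=5, hidden=64, action_dim=2):
        super().__init__()
        self.edge_rnn = nn.GRU(edge_dim, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)
        self.node_rnn = nn.GRU(robot_dim + hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, action_dim)   # e.g. commanded (vx, vy)

    def forward(self, edges, robot):
        # edges: (N_humans, T, edge_dim)  relative human positions over time
        # robot: (1, T, robot_dim)        robot state over the same steps
        h_edges, _ = self.edge_rnn(edges)                # (N, T, hidden)
        w = torch.softmax(self.attn(h_edges), dim=0)     # attention across humans
        crowd = (w * h_edges).sum(dim=0, keepdim=True)   # (1, T, hidden)
        h_node, _ = self.node_rnn(torch.cat([robot, crowd], dim=-1))
        return self.head(h_node[:, -1])                  # action at the last step

# Example: 5 humans observed over 8 time steps.
policy = STGraphPolicy()
action = policy(torch.randn(5, 8, 2), torch.randn(1, 8, 5))
print(action.shape)  # torch.Size([1, 2])
```

The edge RNN captures how each pedestrian moves relative to the robot over time (the temporal dimension), while the attention pooling and node RNN capture which pedestrians matter for the robot’s next velocity (the spatial dimension).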
All of the aforementioned RL techniques focus on mobile robots without direct physical contact with a human. However, staying close to the patient and physically supporting the patient are crucial aspects of a robotic walking assistant. Moreover, these RL techniques limit their treatment of human interaction to training the algorithm not to cross a person’s boundary radius. Simply not crossing that boundary is not enough; the robot should also adhere to social norms while passing by people. To address this issue, Joosse et al. [40] specified explicit limits on interpersonal distance by conducting numerous social experiments in which a robot approaches humans, who then reported their comfort or discomfort in a survey. The authors conclude that distances between 0.0 and 0.45 m constitute a personal boundary and that a robot can pass anywhere between 1.2 and 3.6 m from humans without raising their discomfort. Furthermore, they characterize a slow-approaching robot as moving at 0.4 m/s and a fast-approaching robot at 1 m/s. From this work, it can be concluded that a robot navigating a crowd should keep its speed low while passing close to a human.
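One way such findings can inform an RL-based RWA is as a reward-shaping term that penalizes intrusions into personal space and excessive speed near people. The sketch below uses the distance and speed thresholds reported by Joosse et al. [40]; the penalty magnitudes are illustrative design choices, not values from that paper.

```python
def social_shaping(dist_to_nearest_human, robot_speed):
    """Reward-shaping term built from the interpersonal-distance findings
    of Joosse et al. [40]: <= 0.45 m is personal space, 1.2-3.6 m is a
    comfortable passing distance, and ~0.4 m/s reads as a slow approach.
    The penalty magnitudes are illustrative, not taken from the paper."""
    penalty = 0.0
    if dist_to_nearest_human <= 0.45:          # inside personal space
        penalty -= 1.0
    elif dist_to_nearest_human < 1.2:          # closer than comfortable
        penalty -= 0.25 * (1.2 - dist_to_nearest_human)
    if dist_to_nearest_human < 3.6 and robot_speed > 0.4:
        # Moving fast while near people; scale with the excess speed.
        penalty -= 0.5 * (robot_speed - 0.4)
    return penalty

print(social_shaping(0.9, 0.8))  # close and fast: clearly negative
print(social_shaping(2.5, 0.3))  # comfortable distance, slow: 0.0
```

Added to a task reward such as the goal-progress term above, this shaping encourages the learned policy to slow down and keep its distance when passing people, rather than merely avoiding collisions.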
The robots discussed in this section each address only a subset of the capabilities that an autonomous walking assistant robot should possess: some focus on gait tracking and static obstacle detection, others on crowd navigation. In the crowd navigation work, the mobile robots do not involve any physical contact with a human guiding them through the crowd. However, both gait tracking and crowd navigation are essential for an RWA to be used in hospitals. A robotic walking assistant should continuously support the patient while walking and, at the same time, avoid static obstacles (e.g., sofas, beds, chairs) and dynamic obstacles (e.g., crowds) in order to operate safely in hospitals.

This entry is adapted from the peer-reviewed paper 10.3390/app13074576

References

  1. U.S. Bureau of Labor Statistics. Occupational Employment and Wages. Available online: https://www.bls.gov/ooh/healthcare/registered-nurses.htm (accessed on 2 February 2023).
  2. Hall, L.H.; Johnson, J.; Watt, I.; Tsipa, A.; O’Connor, D.B. Healthcare staff wellbeing, burnout, and patient safety: A systematic review. PLoS ONE 2016, 11, e0159015.
  3. Poghosyan, L.; Clarke, S.P.; Finlayson, M.; Aiken, L.H. Nurse burnout and quality of care: Cross-national investigation in six countries. Res. Nurs. Health 2010, 33, 288–298.
  4. Mo, Y.; Deng, L.; Zhang, L.; Lang, Q.; Liao, C.; Wang, N.; Qin, M.; Huang, H. Work stress among Chinese nurses to support Wuhan in fighting against COVID-19 epidemic. J. Nurs. Manag. 2020, 28, 1002–1009.
  5. Kaiser, M.S.; Al Mamun, S.; Mahmud, M.; Tania, M.H. Healthcare robots to combat COVID-19. In COVID-19: Prediction, Decision-Making, and Its Impacts; Springer: Berlin/Heidelberg, Germany, 2021; pp. 83–97.
  6. Javaid, M.; Haleem, A.; Vaish, A.; Vaishya, R.; Iyengar, K.P. Robotics applications in COVID-19: A review. J. Ind. Integr. Manag. 2020, 5, 441–451.
  7. Kyrarini, M.; Lygerakis, F.; Rajavenkatanarayanan, A.; Sevastopoulos, C.; Nambiappan, H.R.; Chaitanya, K.K.; Babu, A.R.; Mathew, J.; Makedon, F. A Survey of Robots in Healthcare. Technologies 2021, 9, 8.
  8. Qassim, H.M.; Wan Hasan, W. A review on upper limb rehabilitation robots. Appl. Sci. 2020, 10, 6976.
  9. Shi, D.; Zhang, W.; Zhang, W.; Ding, X. A review on lower limb rehabilitation exoskeleton robots. Chin. J. Mech. Eng. 2019, 32, 1–11.
  10. Nomura, T.; Kanda, T.; Yamada, S.; Suzuki, T. The effects of assistive walking robots for health care support on older persons: A preliminary field experiment in an elder care facility. Intell. Serv. Robot. 2021, 14, 25–32.
  11. Kerr, E.; McGinnity, T.; Coleman, S.; Shepherd, A. Human vital sign determination using tactile sensing and fuzzy triage system. Expert Syst. Appl. 2021, 175, 114781.
  12. Kim, W.; Balatti, P.; Lamon, E.; Ajoudani, A. MOCA-MAN: A MObile and reconfigurable Collaborative Robot Assistant for conjoined huMAN-robot actions. In Proceedings of the IEEE International Conference on Robotics and Automation, Paris, France, 31 May–31 August 2020; pp. 10191–10197.
  13. Wu, Y.; Balatti, P.; Lorenzini, M.; Zhao, F.; Kim, W.; Ajoudani, A. A teleoperation interface for loco-manipulation control of mobile collaborative robotic assistant. IEEE Robot. Autom. Lett. 2019, 4, 3593–3600.
  14. Kuzmicheva, O.; Martinez, S.F.; Krebs, U.; Spranger, M.; Moosburner, S.; Wagner, B.; Graser, A. Overground robot based gait rehabilitation system MOPASS - Overview and first results from usability testing. Proc. IEEE Int. Conf. Robot. Autom. 2016, 2016, 3756–3763.
  15. Tan, K.; Koyama, S.; Sakurai, H.; Teranishi, T.; Kanada, Y.; Tanabe, S. Wearable robotic exoskeleton for gait reconstruction in patients with spinal cord injury: A literature review. J. Orthop. Transl. 2021, 28, 55–64.
  16. Matjačić, Z.; Zadravec, M.; Olenšek, A. Feasibility of robot-based perturbed-balance training during treadmill walking in a high-functioning chronic stroke subject: A case-control study. J. Neuro Eng. Rehabil. 2018, 15, 1–15.
  17. Zheng, Q.X.; Ge, L.; Wang, C.C.; Ma, Q.S.; Liao, Y.T.; Huang, P.P.; Wang, G.D.; Xie, Q.L.; Rask, M. Robot-assisted therapy for balance function rehabilitation after stroke: A systematic review and meta-analysis. Int. J. Nurs. Stud. 2019, 95, 7–18.
  18. Chalvatzaki, G.; Koutras, P.; Hadfield, J.; Papageorgiou, X.S.; Tzafestas, C.S.; Maragos, P. Lstm-based network for human gait stability prediction in an intelligent robotic rollator. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 4225–4232.
  19. Van Lam, P.; Fujimoto, Y. A robotic cane for balance maintenance assistance. IEEE Trans. Ind. Inform. 2019, 15, 3998–4009.
  20. Abubakar, S.; Das, S.K.; Robinson, C.; Saadatzi, M.N.; Cynthia Logsdon, M.; Mitchell, H.; Chlebowy, D.; Popa, D.O. ARNA, a Service robot for Nursing Assistance: System Overview and User Acceptability. IEEE Int. Conf. Autom. Sci. Eng. 2020, 2020, 1408–1414.
  21. Ramanathan, M.; Luo, L.; Er, J.K.; Foo, M.J.; Chiam, C.H.; Li, L.; Yau, W.Y.; Ang, W.T. Visual Environment perception for obstacle detection and crossing of lower-limb exoskeletons. IEEE Int. Conf. Intell. Robot. Syst. 2022, 2022, 12267–12274.
  22. Ruiz-Ruiz, F.J.; Giammarino, A.; Lorenzini, M.; Gandarias, J.M.; Gomez-de Gabriel, J.M.; Ajoudani, A. Improving Standing Balance Performance through the Assistance of a Mobile Collaborative Robot. arXiv 2021, arXiv:2109.12038.
  23. Garcia, F.; Pandey, A.K.; Fattal, C. Wait for me! Towards socially assistive walk companions. arXiv 2019, arXiv:1904.08854.
  24. Song, K.T.; Jiang, S.Y.; Wu, S.Y. Safe Guidance for a Walking-Assistant Robot Using Gait Estimation and Obstacle Avoidance. IEEE/ASME Trans. Mechatron. 2017, 22, 2070–2078.
  25. Wenzel, P.; Schön, T.; Leal-Taixé, L.; Cremers, D. Vision-Based Mobile Robotics Obstacle Avoidance With Deep Reinforcement Learning. In Proceedings of the IEEE International Conference on Robotics and Automation, Xi’an, China, 30 May–5 June 2021; pp. 14360–14366.
  26. Zhu, K.; Zhang, T. Deep reinforcement learning based mobile robot navigation: A review. Tsinghua Sci. Technol. 2021, 26, 674–691.
  27. Choi, J.; Lee, G.; Lee, C. Reinforcement learning-based dynamic obstacle avoidance and integration of path planning. Intell. Serv. Robot. 2021, 14, 663–677.
  28. Haarnoja, T.; Zhou, A.; Abbeel, P.; Levine, S. Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor. arXiv 2018, arXiv:1801.01290.
  29. Rosmann, C.; Hoffmann, F.; Bertram, T. Timed-Elastic-Bands for time-optimal point-to-point nonlinear model predictive control. In Proceedings of the 2015 European Control Conference, ECC 2015, Linz, Austria, 15–17 July 2015; pp. 3352–3357.
  30. Fox, D.; Burgard, W.; Thrun, S. The dynamic window approach to collision avoidance. IEEE Robot. Autom. Mag. 1997, 4, 23–33.
  31. Biswas, A.; Wang, A.; Silvera, G.; Steinfeld, A.; Admoni, H. SocNavBench: A Grounded Simulation Testing Framework for Evaluating Social Navigation. ACM Trans. Hum. Robot. Interact. 2020, 12, 1–24.
  32. Lerner, A.; Chrysanthou, Y.; Lischinski, D. Crowds by Example. Comput. Graph. Forum 2007, 26, 655–664.
  33. Pellegrini, S.; Ess, A.; Schindler, K.; Gool, L.V. You’ll never walk alone: Modeling social behavior for multi-target tracking. In Proceedings of the IEEE International Conference on Computer Vision, Kyoto, Japan, 29 September–2 October 2009; pp. 261–268.
  34. Chen, C.; Liu, Y.; Kreiss, S.; Alahi, A. Crowd-Robot Interaction: Crowd-aware Robot Navigation with Attention-based Deep Reinforcement Learning. In Proceedings of the IEEE International Conference on Robotics and Automation, Montreal, QC, Canada, 20–24 May 2019; pp. 6015–6022.
  35. Berg, J.V.D.; Guy, S.J.; Lin, M.; Manocha, D. Reciprocal n-Body Collision Avoidance. Springer Tracts Adv. Robot. 2011, 70, 3–19.
  36. Liu, L.; Dugas, D.; Cesari, G.; Siegwart, R.; Dube, R. Robot navigation in crowded environments using deep reinforcement learning. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, Las Vegas, NV, USA, 24 October 2020–24 January 2021; pp. 5671–5677.
  37. Liu, S.; Chang, P.; Liang, W.; Chakraborty, N.; Driggs-Campbell, K. Decentralized Structural-RNN for Robot Crowd Navigation with Deep Reinforcement Learning. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; pp. 3517–3524.
  38. Jain, A.; Zamir, A.R.; Savarese, S.; Saxena, A. Structural-RNN: Deep Learning on Spatio-Temporal Graphs. arXiv 2016, arXiv:1511.05298.
  39. Vemula, A.; Muelling, K.; Oh, J. Social Attention: Modeling Attention in Human Crowds. arXiv 2018, arXiv:1710.04689.
  40. Joosse, M.; Lohse, M.; Berkel, N.V.; Sardar, A.; Evers, V. Making Appearances: How Robots Should Approach People. ACM Trans. Hum. Robot. Interact. 2021, 1–24.