Human Activity Recognition and Fall Detection

In health monitoring systems for the elderly, a crucial aspect is unobtrusively and continuously monitoring their activities so that potentially hazardous incidents, such as sudden falls, are detected as soon as they occur. However, the effectiveness of current non-contact sensor-based activity detection systems is limited by obstacles present in the environment. A straightforward yet highly effective way to overcome this limitation is to use multiple sensors that work together.

Keywords: activity detection; human activity recognition; fall detection; healthcare; 2D Lidar

1. Introduction

The aging demographic landscape presents a significant challenge for healthcare systems worldwide. As life expectancy increases, so does the prevalence of age-related health issues, particularly those associated with falls and mobility impairments. According to the World Health Organization (WHO) [1], falls are the second leading cause of unintentional injury deaths globally, accounting for over 650,000 fatalities each year. Furthermore, non-fatal falls often result in severe injuries and reduced quality of life, contributing further to the immense financial burden on healthcare systems.
With that in mind, the field of human activity recognition (HAR) and fall detection has witnessed remarkable advancements, driven primarily by the rapid development and integration of sensor technologies. These advancements have the potential to improve the healthcare landscape, assist the elderly and individuals with mobility constraints, and enhance their overall quality of life. While cost-effective solutions, like those utilizing cameras [2] and audio data collection [3], offer practicality, they simultaneously raise potential privacy concerns. Sensors have thus emerged as a key tool in this endeavor, providing the means to monitor, understand, and respond to human movements and behaviors within indoor environments. These sensors are at the forefront of research and technological innovation as they address the pressing need for improved healthcare, patient safety, and assisted living for an aging global population.
For several decades now, there has been a remarkable upswing in research aimed at closely monitoring the physical activity of elderly individuals and those with mobility constraints using sensors. Numerous approaches have emerged as potential solutions to this challenge, accounting for a wide variety of factors and constraints, including speed, performance, ease of deployment, scalability, and, most importantly, the preservation of the privacy of the individuals under observation.
Sensors used for HAR can broadly be divided into two distinct categories.
  • The first category comprises sensors that need to be physically attached to the individual being monitored, either directly to their bodies or as part of their clothing. These sensors commonly leverage accelerometers and gyroscopes as well as pressure sensors to detect movements across various dimensions and subsequently identify different activities. Often integrated into wearable devices like smartwatches [4] and smart clothing items [5], these attached sensors provide invaluable data. It is important, however, to acknowledge that they can be somewhat obtrusive and may influence the comfort and independence of the elderly individuals under surveillance.
  • The second category comprises sensors that do not need to be carried by the monitored individuals or attached to their bodies/clothes. Instead, they are strategically positioned within the environment where these individuals reside, such as their room or a care facility. These sensors are designed to capture diverse types of data, including heat distribution [6][7][8][9], and fluctuations and perturbations in WiFi signal propagation [10][11][12]. They offer a less invasive method of monitoring physical activity and cause minimal disruption to the daily routines and comfort of the elderly individuals. As a result, this category of sensors has gained substantial attention among researchers, aligning with the overarching goal of preserving the autonomy and well-being of the elderly.
In essence, the pursuit of effective physical activity monitoring for elderly individuals and those with mobility challenges is a multifaceted endeavor that hinges on an equilibrium between technology, privacy, cost, and comfort. Research in this field continues to evolve, with a dedication to addressing these core considerations while striving to enhance the quality of life and overall well-being of the individuals being monitored.
In a relevant context, light detection and ranging (Lidar) technology, despite its historical association with specialized applications such as meteorology, agriculture, and, more recently, autonomous driving [13][14], has traditionally been constrained by its high cost. However, there has been a notable shift over the years, marked by a significant reduction in the cost of this technology. This reduction has facilitated the expansion of Lidar’s utility across a broader spectrum of domains. In this evolving landscape, Lidar systems have garnered increased attention, particularly in indoor environments. One of their prominent applications is the detection of dynamic objects indoors. This encompasses the identification, recognition, and tracking of moving entities, whether they are humans, pets, or robots. While the field of Detection And Tracking of Moving Objects (DATMO) is not new and has been subject to extensive research, the utilization of Lidar technology for this purpose is a relatively recent development [15]. Several techniques for object localization and tracking are being explored in this context. More importantly, several research efforts have emerged, leveraging 2D Lidar-related techniques in the medical field to enhance, for instance, gait tracking [16] and to perform HAR [17][18][19]. These developments represent a significant leap forward in the application of this technology to healthcare-related activity detection.

2. Lidar Technology

As its cost has decreased over the years, Lidar technology has become affordable for regular consumers, and its usage has expanded into various domains. Moreover, as the technology has matured, transmission power has decreased, making it safe to use around humans and other living things. Within this evolving landscape, Lidar has garnered increased attention in indoor environments. One prominent area where Lidar has shown promise is the detection of dynamic objects within indoor spaces. This encompasses the identification, recognition, and tracking of moving entities, whether they be humans, pets, or robots. Several localization techniques have been developed for this purpose. Some employ probabilistic methods aimed at clustering data points and determining the changing positions of these clusters over time, as seen in works like [20]. Similarly, for dynamic environment mapping, works such as [21] propose probabilistic techniques for identifying and isolating dynamic objects to represent them in 3D space. Nonetheless, one of the most widely adopted techniques is frame-to-frame scan matching, initially pioneered by Besl and McKay [22] and further refined in subsequent works like [23][24]. Frame-to-frame scan matching involves determining the relative transformation between two consecutive, overlapping frames, providing a means to track dynamic objects. These methods are commonly applied in the field of robotics to enable autonomous robots to navigate specific environments. However, these localization and identification techniques have also found applications in other fields, such as the medical field, for tasks like human monitoring and human activity recognition (also referred to as activity detection).
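To make the frame-to-frame idea concrete, the following is a minimal sketch of point-to-point scan matching in the spirit of the iterative closest point (ICP) algorithm of Besl and McKay [22], assuming NumPy and SciPy are available. The function name and parameters are illustrative; practical scan matchers add outlier rejection and convergence checks.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_2d(src, dst, iterations=20):
    """Minimal point-to-point ICP: align 2D scan `src` to `dst`.

    src, dst: (N, 2) arrays of Cartesian points from two consecutive scans.
    Returns the accumulated 2x2 rotation R and 2-vector t with dst ~ R @ p + t.
    """
    R_total, t_total = np.eye(2), np.zeros(2)
    cur = src.copy()
    tree = cKDTree(dst)                     # target scan stays fixed
    for _ in range(iterations):
        # 1. Match each source point to its nearest neighbour in the target scan.
        _, idx = tree.query(cur)
        matched = dst[idx]
        # 2. Solve the least-squares rigid transform (Kabsch/SVD solution).
        mu_s, mu_d = cur.mean(axis=0), matched.mean(axis=0)
        H = (cur - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:            # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_d - R @ mu_s
        # 3. Apply the incremental transform and accumulate it.
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total, cur
```

After alignment, points whose residual distance to the target scan remains large are candidates for dynamic objects, which is what makes this registration step useful for tracking moving entities rather than only for ego-motion estimation.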

3. Activity Detection

Monitoring the activities of elderly individuals and those with disabilities and/or limited mobility has become a prominent subject of study, attracting the attention of researchers in both academia and industry. Various technologies and methods aiming to perform this task (i.e., activity monitoring) have been introduced in the literature, taking into account factors such as cost-effectiveness, performance, ease of implementation, scalability, and privacy preservation.
However, cost-effective approaches employing technologies like cameras [2] or audio data [3] have raised privacy concerns. Consequently, there has been a growing emphasis on alternative methods that are less invasive in terms of privacy. These alternatives predominantly revolve around the use of sensors. As previously stated, sensors can be broadly classified into two categories: attached sensors, which are integrated into devices worn by individuals (e.g., those with accelerometers and gyroscopes, as seen in the work of Zhuang et al. [4]), and non-attached sensors, which capture data from the surrounding environment (e.g., by monitoring heat distribution or Wi-Fi signals, as in [6][7][8][9][10][11][12]).
In the former category, approaches such as those of Ha and Choi [25] and Mishkhal [26] rely on accelerometers attached to the body to recognize activities. Webber and Rojas [27] instead proposed using smartphones attached to different parts of the body. Similarly, Barna et al. [28] included other sensed data, such as humidity and temperature, in addition to gyroscope data, to recognize human activities.
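As an illustration of how such attached-sensor pipelines typically work, below is a minimal sketch of sliding-window feature extraction and classification over tri-axial accelerometer data, assuming NumPy and scikit-learn. The window size, hand-crafted features, classifier choice, and synthetic data are illustrative assumptions, not the specific designs of [25][26][27][28].

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def windows(signal, size=128, step=64):
    """Split an (N, 3) accelerometer stream into overlapping windows."""
    return np.stack([signal[i:i + size]
                     for i in range(0, len(signal) - size + 1, step)])

def features(win):
    """Simple per-axis statistics often used as hand-crafted HAR features."""
    return np.concatenate([win.mean(axis=0), win.std(axis=0),
                           win.min(axis=0), win.max(axis=0)])

# Placeholder data standing in for a real recording (e.g., sampled at 50 Hz)
# with one activity label per window.
rng = np.random.default_rng(0)
acc = rng.normal(size=(5000, 3))
X = np.array([features(w) for w in windows(acc)])
y = rng.integers(0, 4, size=len(X))     # hypothetical activity IDs 0..3

clf = RandomForestClassifier(n_estimators=100).fit(X, y)
print(clf.predict(X[:5]))               # predicted activity IDs for new windows
```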
However, the preference for approaches centered on non-attached sensors stems from their ability to reduce the burden on individuals being monitored and to address privacy concerns more effectively. While each sensor type has its own set of advantages and limitations, the paramount considerations revolve around privacy preservation and user-friendliness. As a result, the research community has gravitated towards non-attached sensors as the focus of their endeavors.
Infrared (IR) array sensors, for instance, capture the heat emitted by the human body and map it into a low-resolution, image-like format for processing. Approaches for human recognition and activity detection using IR array sensors include those of Yang et al. [6], Burns et al. [29], and Bouazizi et al. [9][30]. While IR array sensors achieve impressive results (i.e., over a 92% F1-score in the case of [6]) without compromising on aspects such as privacy and cost, they have a few limitations of their own. On the one hand, they are very sensitive to noise and to the presence of heat-emitting sources such as electronic devices (computers, stoves, etc.). On the other hand, obstacles limit their potential by blocking the IR rays from reaching the sensors. Finally, because they have very limited coverage, several sensors must be strategically placed within a single room to guarantee full coverage while keeping deployment costs low.
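For intuition, the sketch below shows how a low-resolution IR array frame might be turned into a detection via background subtraction and thresholding. The 8x8 grid size, temperature thresholds, and function are illustrative assumptions (array resolutions vary by sensor), not the methods of [6][9][29][30].

```python
import numpy as np

def detect_person(frame, background, delta=1.5, min_pixels=3):
    """Flag a possible person in a low-resolution IR array frame.

    frame, background: (8, 8) temperature grids in degrees Celsius.
    delta: rise over the empty-room background considered "warm".
    Returns the (row, col) centroid of the warm blob, or None.
    """
    hot = frame - background > delta    # pixels warmer than the background
    if hot.sum() < min_pixels:          # too few warm pixels: likely noise
        return None
    ys, xs = np.nonzero(hot)
    return ys.mean(), xs.mean()         # centroid of the warm region

background = np.full((8, 8), 21.0)      # empty-room reference frame
frame = background.copy()
frame[3:5, 4:6] += 4.0                  # synthetic warm region (a person)
print(detect_person(frame, background)) # -> approximately (3.5, 4.5)
```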
Frequency-modulated continuous-wave (FMCW) radars have attracted similar attention given their ability to identify activities, in particular falls, while also allowing the extraction of vital signs such as heartbeat and respiration rates. Ahmed et al. [31] used such radars to recognize 7 different activities in an unconstrained environment, with an accuracy reaching 91%. Saeed et al. [32] proposed a similar approach in which they extracted spectrograms from short recordings of FMCW radars (i.e., between 5 and 10 s), which they processed via a Residual Neural Network (ResNet) to classify 6 activities with 85% accuracy in unseen environments and 96% in seen ones. Shah and Fioranelli [33] used two machine learning algorithms, Support Vector Machine (SVM) and k-nearest neighbour (kNN), for classification, with AlexNet [34] for feature extraction. They reached 81% accuracy in activity recognition across 4 different scenarios.
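To illustrate the spectrogram step in such pipelines, the sketch below computes a time-frequency map from a synthetic complex radar return using a short-time Fourier transform, assuming SciPy. The sampling rate, STFT parameters, and signal are illustrative; a real system would feed the resulting image to a CNN such as a ResNet, in the spirit of [32].

```python
import numpy as np
from scipy.signal import stft

fs = 1000                                # slow-time sampling rate (Hz), illustrative
t = np.arange(0, 5, 1 / fs)              # 5 s clip, matching the 5-10 s windows in [32]

# Synthetic micro-Doppler-like return: a frequency-modulated tone plus noise,
# standing in for the radar's demodulated slow-time signal.
sig = np.exp(1j * 2 * np.pi * (50 * t + 30 * np.sin(2 * np.pi * 0.8 * t)))
sig += 0.1 * (np.random.randn(len(t)) + 1j * np.random.randn(len(t)))

# Short-time Fourier transform -> two-sided time-frequency map.
f, ts, Z = stft(sig, fs=fs, nperseg=256, noverlap=192, return_onesided=False)
spectrogram_db = 20 * np.log10(np.abs(Z) + 1e-12)
print(spectrogram_db.shape)              # (freq bins, time frames): the "image" a CNN classifies
```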
Lidar technology, the focus of this work, has also been explored in the literature as a candidate for a non-intrusive robust method for activity detection-related tasks such as fall detection. Several studies have explored the application of 2D Lidar techniques for purposes such as gait tracking and human activity recognition [16][17][18][19].
Building upon the research conducted by Piezzo et al. [35] and Li et al. [36], Duong et al., in their work [16], employed 2D Lidar technology for tracking the gait of individuals who use walkers as part of gait rehabilitation. More recently, researchers like Luo et al. [17] and Bouazizi et al. [18][19] have harnessed Deep Learning (DL) algorithms to achieve activity detection using 2D Lidar technology.
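As a rough illustration of preprocessing common to 2D-Lidar pipelines of this kind, the sketch below converts a polar scan to Cartesian points, removes the static background, and clusters the remaining points to obtain candidate positions of a moving person (or legs). The background-subtraction rule, DBSCAN parameters, and synthetic scan are illustrative assumptions, not the exact methods of [16][17][18][19].

```python
import numpy as np
from sklearn.cluster import DBSCAN

def scan_to_points(ranges, fov=2 * np.pi):
    """Convert a 2D Lidar scan (one range per beam) to Cartesian points."""
    angles = np.linspace(0.0, fov, len(ranges), endpoint=False)
    return np.column_stack([ranges * np.cos(angles), ranges * np.sin(angles)])

def foreground_clusters(ranges, background, tol=0.1, eps=0.15, min_samples=3):
    """Return centroids of point clusters nearer than the static map.

    background: per-beam range of the empty room; a beam is "foreground"
    when its current return is at least `tol` metres closer than that map.
    """
    fg = ranges < background - tol
    pts = scan_to_points(ranges)[fg]
    if len(pts) == 0:
        return []
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(pts)
    return [pts[labels == k].mean(axis=0) for k in set(labels) if k != -1]

# Synthetic example: empty room at 4 m, a leg-like object at ~2 m over a few beams.
background = np.full(360, 4.0)
ranges = background.copy()
ranges[85:95] = 2.0
print(foreground_clusters(ranges, background))  # ~[x, y] of the moving object
```

Tracking the resulting centroids across consecutive scans yields the trajectories that downstream models, such as the deep learning classifiers of [17][18][19], consume for activity recognition and fall detection.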
In addition to 2D Lidars, 3D Lidars have also been explored more intensively in recent years. Works such as those of Roche et al. [37] and Benedek et al. [38] have shown significant results in HAR and gait analysis tasks, reaching accuracies of over 90% for the former and over 80% for the latter.

References

  1. Andersen, M.; Bhaumik, S.; Brown, J.; Elkington, J.; Ivers, R.; Keay, L.; Lim, M.L.; Lukaszyk, C.; Ma, T.; Meddings, D.; et al. Step Safely: Strategies for Preventing and Managing Falls Across the Life-Course; World Health Organization: Geneva, Switzerland, 2021.
  2. Berger, J.; Lu, S. A Multi-camera System for Human Detection and Activity Recognition. Procedia CIRP 2022, 112, 191–196.
  3. Stork, J.A.; Spinello, L.; Silva, J.; Arras, K.O. Audio-based Human Activity Recognition Using Non-Markovian Ensemble Voting. In Proceedings of the 21st IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Paris, France, 9–13 September 2012; pp. 509–514.
  4. Zhuang, Z.; Xue, Y. Sport-related human activity detection and recognition using a smartwatch. Sensors 2019, 19, 5001.
  5. Zhang, Z.; He, T.; Zhu, M.; Sun, Z.; Shi, Q.; Zhu, J.; Dong, B.; Yuce, M.R.; Lee, C. Deep learning-enabled triboelectric smart socks for IoT-based gait analysis and VR applications. NPJ Flex. Electron. 2020, 4, 29.
  6. Yang, Y.; Yang, H.; Liu, Z.; Yuan, Y.; Guan, X. Fall detection system based on infrared array sensor and multi-dimensional feature fusion. Measurement 2022, 192, 110870.
  7. Ben-Sadoun, G.; Michel, E.; Annweiler, C.; Sacco, G. Human fall detection using passive infrared sensors with low resolution: A systematic review. Clin. Interv. Aging 2022, 17, 35–53.
  8. Ogawa, Y.; Naito, K. Fall detection scheme based on temperature distribution with IR array sensor. In Proceedings of the 2020 IEEE International Conference on Consumer Electronics (ICCE), Las Vegas, NV, USA, 4–6 January 2020; pp. 1–5.
  9. Bouazizi, M.; Ye, C.; Ohtsuki, T. Low-Resolution Infrared Array Sensor for Counting and Localizing People Indoors: When Low End Technology Meets Cutting Edge Deep Learning Techniques. Information 2022, 13, 132.
  10. Mattela, G.; Tripathi, M.; Pal, C. A Novel Approach in WiFi CSI-Based Fall Detection. SN Comput. Sci. 2022, 3, 214.
  11. Chen, S.; Yang, W.; Xu, Y.; Geng, Y.; Xin, B.; Huang, L. AFall: Wi-Fi-based device-free fall detection system using spatial angle of arrival. IEEE Trans. Mob. Comput. 2022, 22, 4471–4484.
  12. Hu, Y.; Zhang, F.; Wu, C.; Wang, B.; Liu, K.R. DeFall: Environment-independent passive fall detection using WiFi. IEEE Internet Things J. 2021, 9, 8515–8530.
  13. Banta, R.M.; Pichugina, Y.L.; Kelley, N.D.; Hardesty, R.M.; Brewer, W.A. Wind energy meteorology: Insight into wind properties in the turbine-rotor layer of the atmosphere from high-resolution Doppler Lidar. Bull. Am. Meteorol. Soc. 2013, 94, 883–902.
  14. Gao, H.; Cheng, B.; Wang, J.; Li, K.; Zhao, J.; Li, D. Object classification using CNN-based fusion of vision and Lidar in autonomous vehicle environment. IEEE Trans. Ind. Inform. 2018, 14, 4224–4231.
  15. Llamazares, Á.; Molinos, E.J.; Ocaña, M. Detection and tracking of moving obstacles (DATMO): A review. Robotica 2020, 38, 761–774.
  16. Duong, H.T.; Suh, Y.S. Human gait tracking for normal people and walker users using a 2D Lidar. IEEE Sensors J. 2020, 20, 6191–6199.
  17. Luo, F.; Poslad, S.; Bodanese, E. Temporal convolutional networks for multiperson activity recognition using a 2-d Lidar. IEEE Internet Things J. 2020, 7, 7432–7442.
  18. Bouazizi, M.; Ye, C.; Ohtsuki, T. 2D Lidar-Based Approach for Activity Identification and Fall Detection. IEEE Internet Things J. 2021, 9, 10872–10890.
  19. Bouazizi, M.; Ye, C.; Ohtsuki, T. Activity Detection using 2D Lidar for Healthcare and Monitoring. In Proceedings of the IEEE Global Communications Conference (GLOBECOM), Madrid, Spain, 7–11 December 2021; pp. 1–6.
  20. Tipaldi, G.D.; Ramos, F. Motion clustering and estimation with conditional random fields. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, St Louis, MO, USA, 11–15 October 2009; pp. 872–877.
  21. Hahnel, D.; Triebel, R.; Burgard, W.; Thrun, S. Map building with mobile robots in dynamic environments. In Proceedings of the IEEE International Conference on Robotics and Automation, Taipei, Taiwan, 14–19 September 2003; Volume 2, pp. 1557–1563.
  22. Besl, P.J.; McKay, N.D. Method for registration of 3-D shapes. In Proceedings of the Sensor Fusion IV: Control Paradigms Data Structures, Boston, MA, USA, 14–15 November 1992; Volume 1611, pp. 586–606.
  23. Dubé, R.; Dugas, D.; Stumm, E.; Nieto, J.; Siegwart, R.; Cadena, C. Segmatch: Segment based place recognition in 3d point clouds. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 5266–5272.
  24. Konolige, K.; Agrawal, M. Frame-Frame Matching for Realtime Consistent Visual Mapping. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Rome, Italy, 10–14 April 2007; pp. 2803–2810.
  25. Ha, S.; Choi, S. Convolutional neural networks for human activity recognition using multiple accelerometer and gyroscope sensors. In Proceedings of the International Joint Conference on Neural Networks (IJCNN), Vancouver, BC, Canada, 24–29 July 2016; pp. 381–388.
  26. Mishkhal, I.A. Human Activity Recognition Based on Accelerometer and Gyroscope Sensors. Master’s Thesis, Ball State University, Muncie, IN, USA, 2017.
  27. Webber, M.; Rojas, R.F. Human Activity Recognition With Accelerometer and Gyroscope: A Data Fusion Approach. IEEE Sensors J. 2021, 21, 16979–16989.
  28. Barna, A.; Masum, A.K.M.; Hossain, M.E.; Bahadur, E.H.; Alam, M.S. A study on human activity recognition using gyroscope, accelerometer, temperature and humidity data. In Proceedings of the International Conference on Electrical, Computer and Communication Engineering (ECCE), Cox’sBazar, Bangladesh, 7–9 February 2019; pp. 1–6.
  29. Burns, M.; Cruciani, F.; Morrow, P.; Nugent, C.; McClean, S. Using convolutional neural networks with multiple thermal sensors for unobtrusive pose recognition. Sensors 2020, 20, 6932.
  30. Bouazizi, M.; Ohtsuki, T. An Infrared Array Sensor-Based Method for Localizing and Counting People for Health Care and Monitoring. In Proceedings of the 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Montreal, QC, Canada, 20–24 July 2020; pp. 4151–4155.
  31. Ahmed, S.; Park, J.; Cho, S.H. FMCW Radar Sensor Based Human Activity Recognition using Deep Learning. In Proceedings of the International Conference on Electronics, Information, and Communication (ICEIC), Jeju, Republic of Korea, 6–9 February 2022; pp. 1–5.
  32. Saeed, U.; Shah, S.Y.; Shah, S.A.; Ahmad, J.; Alotaibi, A.A.; Althobaiti, T.; Ramzan, N.; Alomainy, A.; Abbasi, Q.H. Discrete human activity recognition and fall detection by combining FMCW RADAR data of heterogeneous environments for independent assistive living. Electronics 2021, 10, 2237.
  33. Shah, S.A.; Fioranelli, F. Human Activity Recognition: Preliminary Results for Dataset Portability using FMCW Radar. In Proceedings of the International Radar Conference (RADAR), Toulon, France, 23–27 September 2019; pp. 1–4.
  34. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. In Proceedings of the 25th International Conference on Neural Information Processing Systems (NIPS 2012), Lake Tahoe, NV, USA, 3–6 December 2012; Volume 25.
  35. Piezzo, C.; Leme, B.; Hirokawa, M.; Suzuki, K. Gait measurement by a mobile humanoid robot as a walking trainer. In Proceedings of the 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Lisbon, Portugal, 28 August–1 September 2017; pp. 1084–1089.
  36. Li, D.; Li, L.; Li, Y.; Yang, F.; Zuo, X. A multi-type features method for leg detection in 2-D laser range data. IEEE Sensors J. 2017, 18, 1675–1684.
  37. Roche, J.; De-Silva, V.; Hook, J.; Moencks, M.; Kondoz, A. A multimodal data processing system for Lidar-based human activity recognition. IEEE Trans. Cybern. 2021, 52, 10027–10040.
  38. Benedek, C.; Gálai, B.; Nagy, B.; Jankó, Z. Lidar-based gait analysis and activity recognition in a 4d surveillance system. IEEE Trans. Circuits Syst. Video Technol. 2016, 28, 101–113.