Dynamic Distributed Intelligence Architecture for Human Activity Recognition

A wide range of applications, including sports and healthcare, use human activity recognition (HAR). The Internet of Things (IoT), using cloud systems, offers enormous resources but introduces high latency and large volumes of traffic. Researchers propose a distributed-intelligence, dynamic HAR architecture using smart IoT devices, edge devices, and cloud computing. These tiers were used to train models, store results, and serve real-time predictions. Wearable sensors and smartphones were deployed on the human body to detect activities from three positions; accelerometer and gyroscope readings were used to recognize the activities. Models were selected dynamically, depending on the availability of the data and the mobility of the users.

Internet of Things (IoT); wearable sensors; human activity recognition; edge computing; cloud computing; dynamic feature selection

1. Introduction

The Internet of Things (IoT) has revolutionized the way we interact with our environment, enabling the connection of everyday things to the internet [1][2]. One application of the IoT is human activity recognition (HAR), where smart devices monitor and recognize human activities for various purposes. HAR is essential in a number of industries, including sports [3][4], healthcare [5][6][7], and smart environments [8][9][10][11]; information about human activities has been collected using smartphones and wearable sensor technologies [12][13][14][15][16].
Machine learning (ML) plays a significant role in HAR systems [11]. ML automatically identifies and classifies different activities performed by individuals based on sensor data or other input sources. Feature extraction derives relevant features from raw sensor data, such as accelerometer or gyroscope readings, to represent different activities. ML models can be trained on labeled data to recognize patterns, make predictions, and classify activities based on the extracted features, and can then be deployed to perform real-time activity recognition, allowing for immediate feedback or intervention. Feature fusion combines information from different sources to improve the accuracy and robustness of activity recognition systems [17][18][19].
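To make this pipeline concrete, the sketch below extracts simple sliding-window statistics from a six-channel accelerometer/gyroscope stream and trains a classifier on them. The window length, feature set, random forest choice, and placeholder data are illustrative assumptions, not the exact configuration of the works cited above.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(signal, window=128, step=64):
    """Slide a fixed-length window over a (samples, channels) array and
    compute simple per-channel statistics (mean, std, min, max)."""
    feats = []
    for start in range(0, len(signal) - window + 1, step):
        w = signal[start:start + window]
        feats.append(np.concatenate([w.mean(axis=0), w.std(axis=0),
                                     w.min(axis=0), w.max(axis=0)]))
    return np.array(feats)

# Placeholder stream: 6 channels (3-axis accelerometer + 3-axis gyroscope).
rng = np.random.default_rng(0)
X_raw = rng.standard_normal((10_000, 6))
X = window_features(X_raw)                  # one feature row per window
y = rng.integers(0, 4, size=len(X))         # placeholder activity labels

clf = RandomForestClassifier(n_estimators=100).fit(X, y)
print(clf.predict(X[:5]))                   # per-window activity predictions
```

Feature fusion then amounts to concatenating such feature rows from several sensors or body positions before training.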
Cloud computing offers virtually unlimited resources for data storage and processing, making it an attractive option for HAR applications built on wearable sensors. The integration of edge and cloud computing offers promising solutions to the challenges of processing wearable sensor data: edge computing provides low-latency and privacy-preserving capabilities, while cloud computing offers scalability and storage advantages [20][21][22].
To combine the benefits of both, researchers have proposed hybrid architectures that draw on the strengths of each paradigm [21][22]. These architectures aim to balance real-time processing at the edge with the scalability and storage capabilities of the cloud. Several publications have demonstrated improved recognition accuracy and reduced response times compared with solely cloud-based approaches [23][24][25].
Distributed intelligence methods that make use of cloud and edge computing have shown promise in addressing these issues [26][27]: the concept leverages the power of multiple interconnected devices to perform complex tasks. This research explored the application of distributed intelligence in the IoT for human activity recognition, focusing on the Raspberry Pi as a platform. Healthcare systems using wearable sensors are an emerging field that aims to understand and classify human activities based on data collected from sensors integrated into wearable devices; these can include accelerometers, gyroscopes, magnetometers, heart rate monitors, blood pressure monitors, and other medical sensors [8][12][27]. Here, only accelerometers and gyroscopes were used.
Healthcare applications have been improved by combining wearable and mobile sensors with IoT infrastructure, and medical device usability has been improved by combining mobile applications with IoT technology. The IoT is expected to have a particularly significant impact on healthcare, with the potential to improve people's quality of life in general. The authors of [28] presented a "Stress-Track" system using machine learning. Their device was intended to monitor an individual's stress levels through measurements of body temperature, perspiration, and movement rate during exercise. With a high accuracy of 99.5%, the proposed model demonstrated its potential for stress reduction and better health.
Both wearable sensors and computer vision have been employed to recognize activities. Wearable sensors are worn directly by individuals, allowing for activity recognition in indoor and outdoor environments; they also offer portability and continuous data collection, enabling long-term monitoring and analysis. Computer vision algorithms, by contrast, can capture a wider perspective of the environment, including objects, scenes, and interactions with other people, providing more contextual information for activity recognition, but they require sufficient lighting and clear perspectives for precise recognition.
HAR has been implemented with different types of motion sensors; one of them is the MPU6050 inertial measurement unit (IMU), which integrates a tri-axial gyroscope and a tri-axial accelerometer [16][29]. This small module, measuring 21.2 mm × 16.4 mm × 3.3 mm and weighing 2.1 g, is an inexpensive and popular choice for capturing motion data in applications including HAR systems [7][16], sports [30][31], and earthquake detection [29]. The MPU6050 can be used for HAR, but achieving high accuracy requires careful consideration of sensor placement, feature extraction, classification algorithms, training data quality, and environmental factors [32].
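As a rough illustration of how such readings are obtained on a Raspberry Pi, the sketch below polls an MPU6050 over I2C using the smbus2 library. The register map and scale factors follow the MPU6050 datasheet defaults (±2 g and ±250 °/s ranges); the bus number and the 50 Hz polling rate are assumptions.

```python
import time
from smbus2 import SMBus

MPU_ADDR = 0x68          # default MPU6050 I2C address
PWR_MGMT_1 = 0x6B        # power management register
ACCEL_XOUT_H = 0x3B      # first of 14 data registers (accel, temp, gyro)

def read_word(bus, reg):
    """Read a signed 16-bit big-endian value from two registers."""
    hi = bus.read_byte_data(MPU_ADDR, reg)
    lo = bus.read_byte_data(MPU_ADDR, reg + 1)
    val = (hi << 8) | lo
    return val - 65536 if val > 32767 else val

with SMBus(1) as bus:                                # I2C bus 1 on a Pi
    bus.write_byte_data(MPU_ADDR, PWR_MGMT_1, 0)     # wake the sensor
    while True:
        ax, ay, az = [read_word(bus, ACCEL_XOUT_H + 2 * i) / 16384.0
                      for i in range(3)]             # g, at +/-2 g range
        gx, gy, gz = [read_word(bus, ACCEL_XOUT_H + 8 + 2 * i) / 131.0
                      for i in range(3)]             # deg/s, at +/-250 deg/s
        print(f"accel=({ax:.2f},{ay:.2f},{az:.2f}) "
              f"gyro=({gx:.2f},{gy:.2f},{gz:.2f})")
        time.sleep(0.02)                             # ~50 Hz polling
```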
Smartphones have also become increasingly popular as a platform for HAR due to their availability, built-in sensors, computational power, and connectivity. Smartphone sensors have been widely used in machine learning to recognize different categories of daily-life activities. In recent research, several papers used smartphone sensors and focused on enhancing prediction accuracy and optimizing algorithms to speed up processing [14][15][32].
The orientation of these sensors should be chosen according to their installation position, the nature of the movements, and the dataset used for training [17][33][34]. An MPU6050 module connected to an ESP32 microcontroller was installed vertically on the shin; another module connected to a Raspberry Pi 3 was installed horizontally on the waist, and a smartphone was placed vertically inside the pocket on the thigh. In the vertical installation, the accelerometer's +Y axis points upward; in the horizontal installation, the +X axis points upward. The axis directions of the MPU6050 module are shown in Figure 1a,b, and the smartphone directions in Figure 1c [32][35].
Figure 1. Accelerometer and gyroscope axis directions in the MPU6050 module and smartphones: (a,b) show the MPU6050 directions, and (c) shows the smartphone directions.
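One way to handle these differing orientations in software is to remap each device's raw axes into a shared body frame before feature extraction, so a single model can serve all placements. The sketch below is a minimal illustration under that assumption; real deployments would also need sign corrections and proper rotation handling.

```python
import numpy as np

# Illustrative permutation matrices mapping raw (x, y, z) readings into a
# shared (up, forward, lateral) body frame, per the orientations above:
# vertical mounts (shin, thigh) have +Y pointing up, while the horizontal
# waist mount has +X pointing up.
REMAP = {
    "vertical":   np.array([[0, 1, 0],
                            [1, 0, 0],
                            [0, 0, 1]]),   # up component taken from +Y
    "horizontal": np.array([[1, 0, 0],
                            [0, 1, 0],
                            [0, 0, 1]]),   # up component taken from +X
}

def to_body_frame(sample_xyz, placement):
    """Remap one raw (x, y, z) sample into the shared body frame."""
    return REMAP[placement] @ np.asarray(sample_xyz, dtype=float)

print(to_body_frame([0.1, 9.7, 0.3], "vertical"))  # gravity now on axis 0
```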

2. Human Activity Recognition Using Wearable Sensors

In recent years, researchers have applied public and private datasets to HAR with wearable sensors and feature fusion to recognize various movements. The authors of [8] used two smartphones with the public WISDM dataset. Healthcare applications have used this dataset, which contains three-dimensional inertial signals of thirteen time-stamped human activities, including walking, writing, smoking, and others. Their HAR system was based on effective hand-crafted features with a random forest classifier. The authors conducted sensitivity analyses of the model's parameters, and accuracy reached 98.7% on average using two devices, one on the wrist and one in the pocket.
The authors of [16] used a deep learning approach, employing three parallel convolutional neural networks with varying kernel sizes for local feature extraction to build feature fusion models and increase HAR accuracy. Two datasets were used: the UCI dataset and a self-recorded dataset of 21 participants wearing devices on their waists while performing six activities in the laboratory. Accuracy reached 97.49% on the UCI dataset and 96.27% on the self-recorded dataset.
The authors of [35] used waist sensors and two graphene/rubber sensors on the knees to detect falls in elderly people. One MPU6050 sensor located at the waist monitored the attitude of the body, while the rubber sensors monitored leg movements by tracking tilt angles in real time. They recorded four activities of daily living and six fall postures; four basic fall-down postures could be identified with the MPU6050 sensor integrated with the rubber sensors. Recognition accuracy was 93.5% for activities of daily living and 90% for fall posture identification.
The authors of [36] proposed a smart e-health framework to monitor the health of elderly and disabled people, generating notifications and performing analyses using edge computing. Three MPU9250 sensors were positioned on the body, on the left ankle, right wrist, and chest, following the MHEALTH dataset; the MPU9250 integrates a 3-axis magnetometer, gyroscope, and accelerometer. A random forest machine learning model was used with the inertial sensors for HAR. Two levels of analysis were considered: the first level used scalar sensors embedded in wearable devices, and cameras were invoked as the second level only if inconsistencies were identified at the first level. Their video-based HAR and fall detection module achieved an accuracy of 86.97% on the DML Smart Actions dataset, and the proposed HAR model with inertial sensors achieved 96% accuracy in a controlled experimental environment.
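This two-level design can be sketched as a simple escalation policy. In the code below, a confidence threshold stands in for the inconsistency check described in [36], and both models are stubs; the structure, not the numbers, is the point.

```python
import random

def inertial_predict(features):
    """Stub for the level-1 classifier on wearable inertial features;
    returns a (label, confidence) pair."""
    return "walking", random.random()

def video_predict():
    """Stub for the level-2 camera-based HAR/fall-detection module."""
    return "falling"

def two_level_har(features, threshold=0.8):
    """Run the cheap inertial classifier first; invoke the camera
    pipeline only when the first level is uncertain."""
    label, confidence = inertial_predict(features)
    return label if confidence >= threshold else video_predict()

print(two_level_har([0.1, 0.9, 0.3]))
```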
A smart system to improve the quality of life of elderly people was proposed by the authors of [37]. Their IoT system uses big data, cloud computing, low-power wireless sensing networks, and smart devices to identify falls. The technology gathered data from elderly people's movements in real time through an accelerometer integrated into a wearable device; the sensor signals were processed and analyzed by a machine learning model on a gateway for fall detection. They varied sensor positioning and channel information in the training set, using the public MobiAct dataset. Their system achieved 95.87% accuracy, and the edge component was able to detect falls in real time.
The majority of these studies focused on improving algorithms, employing different datasets, or adding new features to increase accuracy; occasionally, specialized hardware was added to improve efficiency and assist with classification. Wearable technology relies on sensors installed on the human body to gather data, so comfort and mobility should be taken into account when collecting data from sensors positioned at various body locations.
This research proposes an architecture to monitor a group of people and recognize their behavior. The system was distributed over various devices: sensor devices, smart IoT devices, edge devices, and cloud computing. The architecture used different machine learning models with different numbers of features so that it could handle scenarios where not all sensors are available. The smart IoT device was a simple version of the Raspberry Pi microcomputer configured to run predictions locally, while a Raspberry Pi 4 IoT edge could serve IoT devices, run predictions, and aggregate results in the cloud. The cloud achieved an accuracy of 99.23% in training, while the edge and smart end devices achieved 99.19% with smaller datasets. Under real-time scenarios, accuracy reached 93.5% when all features were available. The smart end device processed each request in 6.38 ms on average; the edge was faster, at 1.62 ms on average, and could serve a group of users with a sufficient number of predictions per user, and the system can serve more people by adding edges or smart end devices. The architecture used distributed intelligence to be dynamic, accurate, and supportive of mobility.
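A minimal sketch of the dynamic-selection idea, with hypothetical names throughout (EDGE_URL, the model registry, and the stub classifier are all assumptions, not the authors' implementation): the device prefers the edge endpoint when it is reachable and otherwise runs the best local model whose required sensor streams are all present.

```python
import requests

class StubModel:
    """Stand-in for a pre-trained classifier (e.g., a random forest)."""
    def __init__(self, name):
        self.name = name
    def predict(self, features):
        return f"{self.name}: walking"       # placeholder activity label

# Hypothetical registry: models trained offline for each sensor subset,
# ordered from most to fewest features (preferred first).
MODELS = [
    ({"shin", "waist", "thigh"}, StubModel("all-sensors")),
    ({"waist", "thigh"}, StubModel("waist+thigh")),
    ({"waist"}, StubModel("waist-only")),
]

EDGE_URL = "http://edge.local:8000/predict"  # hypothetical edge endpoint

def predict(features):
    """Prefer the edge when reachable; otherwise run the best local
    model whose required sensor streams are all present."""
    try:
        r = requests.post(EDGE_URL, json=features, timeout=0.05)
        r.raise_for_status()
        return r.json()["activity"]
    except requests.RequestException:
        pass                                 # edge unreachable: go local
    available = set(features)
    for required, model in MODELS:
        if required <= available:
            return model.predict(features)
    raise RuntimeError("no model matches the available sensors")

# The thigh smartphone is present but the shin sensor is not, so the
# waist+thigh model is selected when the edge cannot be reached.
print(predict({"waist": [0.1, 0.9, 0.2], "thigh": [0.0, 1.0, 0.1]}))
```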
In addition to its strong performance, this proposal provided the following features:
  • Integration and cooperation between the devices were efficient.
  • The achieved accuracy was 99.19% in training and 93.5% in real time.
  • The prediction time was efficient using the smart end and IoT edge devices.
  • Dynamic selection worked efficiently depending on connectivity with the edges.
  • Dynamic selection of models worked efficiently depending on feature availability.
  • The architecture is scalable and serves more than 30 users per edge.

References

  1. Esposito, M.; Belli, A.; Palma, L.; Pierleoni, P. Design and Implementation of a Framework for Smart Home Automation Based on Cellular IoT, MQTT, and Serverless Functions. Sensors 2023, 23, 4459.
  2. Franco, T.; Sestrem, L.; Henriques, P.R.; Alves, P.; Varanda Pereira, M.J.; Brandão, D.; Leitão, P.; Silva, A. Motion Sensors for Knee Angle Recognition in Muscle Rehabilitation Solutions. Sensors 2022, 22, 7605.
  3. Zhuang, Z.; Xue, Y. Sport-Related Human Activity Detection and Recognition Using a Smartwatch. Sensors 2019, 19, 5001.
  4. Zhou, E.; Zhang, H. Human action recognition toward massive-scale sport sceneries based on deep multi-model feature fusion. Signal Process. Image Commun. 2020, 84, 115802.
  5. Zhang, S.; Li, Y.; Zhang, S.; Shahabi, F.; Xia, S.; Deng, Y.; Alshurafa, N. Deep Learning in Human Activity Recognition with Wearable Sensors: A Review on Advances. Sensors 2022, 22, 1476.
  6. Bianchi, V.; Bassoli, M.; Lombardo, G.; Fornacciari, P.; Mordonini, M.; De Munari, I. IoT Wearable Sensor and Deep Learning: An Integrated Approach for Personalized Human Activity Recognition in a Smart Home Environment. IEEE Internet Things J. 2019, 6, 8553–8562.
  7. Abdel-Basset, M.; Hawash, H.; Chakrabortty, R.K.; Ryan, M.; Elhoseny, M.; Song, H. ST-DeepHAR: Deep Learning Model for Human Activity Recognition in IoHT Applications. IEEE Internet Things J. 2021, 8, 4969–4979.
  8. Issa, M.E.; Helmi, A.M.; Al-Qaness, M.A.A.; Dahou, A.; Elaziz, M.A.; Damaševičius, R. Human Activity Recognition Based on Embedded Sensor Data Fusion for the Internet of Healthcare Things. Healthcare 2022, 10, 1084.
  9. Qu, Y.; Tang, Y.; Yang, X.; Wen, Y.; Zhang, W. Context-aware mutual learning for semi-supervised human activity recognition using wearable sensors. Expert Syst. Appl. 2023, 219, 119679.
  10. Gulati, N.; Kaur, P.D. An argumentation enabled decision making approach for Fall Activity Recognition in Social IoT based Ambient Assisted Living systems. Future Gener. Comput. Syst. 2021, 122, 82–97.
  11. Nasir, M.; Muhammad, K.; Ullah, A.; Ahmad, J.; Baik, S.W.; Sajjad, M. Enabling automation and edge intelligence over resource constraint IoT devices for smart home. Neurocomputing 2022, 491, 494–506.
  12. Hong, Z.; Hong, M.; Wang, N.; Ma, Y.; Zhou, X.; Wang, W. A wearable-based posture recognition system with AI-assisted approach for healthcare IoT. Future Gener. Comput. Syst. 2022, 127, 286–296.
  13. Khan, I.U.; Afzal, S.; Lee, J.W. Human Activity Recognition via Hybrid Deep Learning Based Model. Sensors 2022, 22, 323.
  14. Tanigaki, K.; Teoh, T.C.; Yoshimura, N.; Maekawa, T.; Hara, T. Predicting Performance Improvement of Human Activity Recognition Model by Additional Data Collection. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2022, 6, 142.
  15. Khalid, A.M.; Khafaga, D.S.; Aldakheel, E.A.; Hosny, K.M. Human Activity Recognition Using Hybrid Coronavirus Disease Optimization Algorithm for Internet of Medical Things. Sensors 2023, 23, 5862.
  16. Yen, C.-T.; Liao, J.-X.; Huang, Y.-K. Feature Fusion of a Deep-Learning Algorithm into Wearable Sensor Devices for Human Activity Recognition. Sensors 2021, 21, 8294.
  17. Qiu, S.; Zhao, H.; Jiang, N.; Wang, Z.; Liu, L.; An, Y.; Zhao, H.; Miao, X.; Liu, R.; Fortino, G. Multi-sensor information fusion based on machine learning for real applications in human activity recognition: State-of-the-art and research challenges. Inf. Fusion 2022, 80, 241–265.
  18. Islam, M.M.; Nooruddin, S.; Karray, F.; Muhammad, G. Multi-level feature fusion for multimodal human activity recognition in Internet of Healthcare Things. Inf. Fusion 2023, 94, 17–31.
  19. Chen, J.; Sun, Y.; Sun, S. Improving Human Activity Recognition Performance by Data Fusion and Feature Engineering. Sensors 2021, 21, 692.
  20. Tuli, S.; Basumatary, N.; Gill, S.S.; Kahani, M.; Arya, R.C.; Wander, G.S.; Buyya, R. HealthFog: An ensemble deep learning based Smart Healthcare System for Automatic Diagnosis of Heart Diseases in integrated IoT and fog computing environments. Future Gener. Comput. Syst. 2020, 104, 187–200.
  21. Aazam, M.; Zeadally, S.; Flushing, E.F. Task offloading in edge computing for machine learning-based smart healthcare. Comput. Netw. 2021, 191, 108019.
  22. Ghosh, A.M.; Grolinger, K. Edge-Cloud Computing for Internet of Things Data Analytics: Embedding Intelligence in the Edge With Deep Learning. IEEE Trans. Ind. Inform. 2021, 17, 2191–2200.
  23. Agarwal, P.; Alam, M. A Lightweight Deep Learning Model for Human Activity Recognition on Edge Devices. Procedia Comput. Sci. 2020, 167, 2364–2373.
  24. Bourechak, A.; Zedadra, O.; Kouahla, M.N.; Guerrieri, A.; Seridi, H.; Fortino, G. At the Confluence of Artificial Intelligence and Edge Computing in IoT-Based Applications: A Review and New Perspectives. Sensors 2023, 23, 1639.
  25. Shaik, T.; Tao, X.; Higgins, N.; Li, L.; Gururajan, R.; Zhou, X.; Acharya, U.R. Remote patient monitoring using artificial intelligence: Current state, applications, and challenges. WIREs Data Min. Knowl. Discov. 2023, 13, e1485.
  26. Mwase, C.; Jin, Y.; Westerlund, T.; Tenhunen, H.; Zou, Z. Communication-efficient distributed AI strategies for the IoT edge. Future Gener. Comput. Syst. 2022, 131, 292–308.
  27. Tang, Y.; Zhang, L.; Wu, H.; He, J.; Song, A. Dual-Branch Interactive Networks on Multichannel Time Series for Human Activity Recognition. IEEE J. Biomed. Health Inform. 2022, 26, 5223–5234.
  28. Al-Atawi, A.A.; Alyahyan, S.; Alatawi, M.N.; Sadad, T.; Manzoor, T.; Farooq-i-Azam, M.; Khan, Z.H. Stress Monitoring Using Machine Learning, IoT and Wearable Sensors. Sensors 2023, 23, 8875.
  29. Duggal, R.; Gupta, N.; Pandya, A.; Mahajan, P.; Sharma, K.; Angra, P. Building structural analysis based Internet of Things network assisted earthquake detection. Internet Things 2022, 19, 100561.
  30. Mekruksavanich, S.; Jitpattanakul, A. Sport-related activity recognition from wearable sensors using bidirectional gru network. Intell. Autom. Soft Comput. 2022, 34, 1907–1925.
  31. Tarafdar, P.; Bose, I. Recognition of human activities for wellness management using a smartphone and a smartwatch: A boosting approach. Decis. Support Syst. 2021, 140, 113426.
  32. Liu, Y.; Li, Z.; Zheng, S.; Cai, P.; Zou, X. An Evaluation of MEMS-IMU Performance on the Absolute Trajectory Error of Visual-Inertial Navigation System. Micromachines 2022, 13, 602.
  33. Dong, D.; Ma, C.; Wang, M.; Vu, H.T.; Vanderborght, B.; Sun, Y. A low-cost framework for the recognition of human motion gait phases and patterns based on multi-source perception fusion. Eng. Appl. Artif. Intell. 2023, 120, 105886.
  34. Krupitzer, C.; Sztyler, T.; Edinger, J.; Breitbach, M.; Stuckenschmidt, H.; Becker, C. Beyond position-awareness—Extending a self-adaptive fall detection system. Pervasive Mob. Comput. 2019, 58, 101026.
  35. Xu, T.; Sun, W.; Lu, S.; Ma, K.; Wang, X. The real-time elderly fall posture identifying scheme with wearable sensors. Int. J. Distrib. Sens. Netw. 2019, 15, 1550147719885616.
  36. Yazici, A.; Zhumabekova, D.; Nurakhmetova, A.; Yergaliyev, Z.; Yatbaz, H.Y.; Makisheva, Z.; Lewis, M.; Ever, E. A smart e-health framework for monitoring the health of the elderly and disabled. Internet Things 2023, 24, 100971.
  37. Kulurkar, P.; Dixit, C.K.; Bharathi, V.C.; Monikavishnuvarthini, A.; Dhakne, A.; Preethi, P. AI based elderly fall prediction system using wearable sensors: A smart home-care technology with IOT. Meas. Sens. 2023, 25, 100614.