Human Activities Recognition Based on Wrist-Worn Wearable Devices: Comparison
Please note this is a comparison between Version 2 by Alexandru Iulian Alexan and Version 1 by Alexandru Iulian Alexan.

Wearable technologies have slowly invaded our lives and can easily help with our day-to-day tasks. One area where wearable devices can shine is in human activity recognition, as they can gather sensor data in a non-intrusive way. We propose a human activity recognition system that is extensible, due to the wide range of sensing devices that can be integrated, and that provides a flexible deployment model based on a common wearable device: a smartwatch. The machine learning component recognizes activity based on plot images generated from raw sensor data and is exposed as a Web API that can be deployed locally or directly in the cloud. The proposed system aims to simplify the human activity recognition process by exposing such capabilities via a web API. This web API can be consumed by small network-enabled wearable devices, even those with basic processing capabilities, by leveraging a simple data contract interface and using raw data. The system replaces extensive pre-processing with high-performance image recognition based on plot images generated from raw sensor data. We have managed to obtain an activity recognition rate of 94.89% and to implement a fully functional real-time human activity recognition microservice.

  • human activity recognition
  • plot image analysis
  • real-time
  • cloud
  • ML.NET

The concept of a smart house or smart living environment is more current than ever, as more and more everyday devices are equipped with smart capabilities. The importance of this research lies in the advantages of being able to monitor and assist a person who uses smart sensors. Internet of Things (IoT) technology is applied in multiple domains such as medicine, manufacturing, emergency response, logistics, and daily life activity recognition. The smartphone is one of the most used devices for HAR, as it can record and process information itself. The major downside of using a smartphone for detecting the user's activity is data downtime when the device is not worn: if the smartphone is not worn or directly used by the user, the system does not receive any relevant data regarding the current activity and, thus, the HAR precision decreases. Smartphones are not necessarily worn consistently, as the wear position will greatly vary depending on the person and situation. A watch has a more stable wear pattern, as it is primarily worn on the user's wrist, usually extensively and for long periods of time. A smartwatch is a small device that can be easily and non-intrusively worn for long periods of time, making it ideal for data acquisition.

Our main objective was to implement a real-time cloud-based human activity recognition system that uses image classification at its core. The system should be able to expose HAR capabilities from the cloud or locally via REST API calls. To achieve this, a .NET Core C\#-based web API was implemented to expose the activity recognition functionality. The main HAR functionality was achieved using a deep neural network created from a pre-trained TensorFlow Resnet50 DNN model and trained for HAR using plot images. The system was trained using data from the WISDM dataset, which is open access. The training data were generated by a separate .NET Core C\# application that produced the plot images from raw accelerometer and gyroscope data. Since the proposed system can also be deployed to the cloud, it can be easily expanded to support multiple sensor modules and users at the same time based on its REST implementation.

Since only raw accelerometer and gyroscope data, provided by the proposed HAR platform, are used, the usable sensor modules can be expanded to any sensor module that has an accelerometer, a gyroscope, and network capabilities. This creates a kind of hardware abstraction layer, as such a module can be implemented with a relatively low number of physical components. This is a very simple option that allows basic network-enabled sensor modules to gain ML.NET deep neural network capabilities. A custom hardware implementation of wrist-worn sensor modules is also a viable option to be integrated into the proposed system, as a web API is used for the final system integration.

For HAR based on accelerometer and gyroscope data, sensors also found in a smartwatch, the classic approach is to use the raw sensor data and preprocess it. Features are then extracted and used to train a neural network for activity recognition. In this scenario, the neural network input is represented by a series of numeric values that try to capture the essence of that particular activity. Based on the raw sensor data, or even on extracted features, plot images can be generated and fed to the neural network as input data instead of numerical values. The human activity recognition task then becomes an image classification task, trying, in essence, to identify the activity from the plot image using specific image classification neural networks. The numerical data that are turned into a plot image can be graphically represented in multiple ways depending on the type of plot and the structure of the input raw data; these variations can have a significant impact on the activity recognition rate.

Our contributions to the HAR field are as follows:

1. The implementation of a real-time system for human activity recognition that can operate locally and in the cloud via REST API calls based on plot image recognition:

         -The implementation and usage of a .NET C\# console application to generate labeled images based on raw accelerometer and gyroscope sensor data;

         -The creation of a .NET C\# application that contains a deep neural network created from a pre-trained TensorFlow DNN model and trained for HAR using plot images;

         -The integration of the created and trained neural network into a .NET Web API application capable of real-time activity recognition based on REST API calls;

         -The further extension of the HAR Web API application capabilities to allow cloud-based activity recognition.

2. The analysis of multiple scenarios for plot image generation configuration and plot types, and the evaluation of the obtained recognition precision results.

3. The conclusion that a real-time HAR system, based on plot image recognition and REST requests, can be a good system architecture for real-time activity recognition.

The `WISDM Smartphone and Smartwatch Activity and Biometrics Dataset' was chosen for this implementation. This dataset is extensively used for human activity recognition and was chosen as it is one of the most important and widely used datasets for human activity recognition based on wearable devices.

We describe a HAR system able to perform real-time activity recognition.
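To make the plot-image idea concrete, a fixed-size window of raw accelerometer samples can be rendered to a small image file that later serves as classifier input. The snippet below is only a sketch in the ScottPlot 4 style used by the preprocessing application; the sampling rate, window length, image size, and file name are illustrative assumptions, not the authors' exact configuration.

```csharp
using System;
using System.Linq;

// Hypothetical sketch: render one accelerometer-axis window as a scatter
// plot image, ready to be used as neural network input.
double[] time = Enumerable.Range(0, 60).Select(i => i / 20.0).ToArray(); // assumed 20 Hz, 3 s window
double[] ax = time.Select(t => Math.Sin(2 * Math.PI * t)).ToArray();     // placeholder sensor values

var plt = new ScottPlot.Plot(224, 224); // small square image (assumed dimensions)
plt.AddScatter(time, ax);
plt.Frameless();                        // axes/labels dropped for ML input (assumption)
plt.SaveFig("window_0.png");
```

The same pattern would be repeated per sensor axis (and for the gyroscope), with the chosen plot style deciding how the numeric window is visually encoded.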

In order to simplify the classic time-series data classification task based on the sensor data, we can handle the human activity recognition task as an image classification one. The numeric sensor data from the smartwatch are used to transform a series of values, consisting of a movement data window, into a single image. In this way, we can provide a visual representation of a data chunk that is easier for a human to analyze and interpret manually. Each movement image can show certain characteristics of that particular activity type, and different plot styles can be used. The main preprocessing application is written in C\# and uses the ``ScottPlot'' plotting library for .NET; it is implemented as a .NET console application.

The proposed system, based on internally generated plot images, has multiple components: a data preprocessing app, a machine learning core processor, a machine learning processor Web API, and a real-time Cloud human activity recognition system. The 'Data preprocessing app' handles the conversion of raw movement data from an accelerometer and gyroscope to a plot image. This conversion transforms a series of values, consisting of a movement data window, into a single image that will be used as input for the machine learning algorithm. The 'Machine learning core processor' represents the main computing logic: a machine learning implementation trained using the previously generated movement images. After training, this component is capable of receiving an image and predicting its source activity type. This behavior is supported only locally, as this component does not support network interactions or advanced conversions from raw data to images. The 'Machine learning processor Web Api' incorporates the 'Machine learning core processor' and is able to support network connections and recognize human activity. The recognition process can be based on a generated plot image, but this component is also able to generate the image itself from raw accelerometer and gyroscope data. So, in order to recognize what activity a series of movement data is part of, a simple API call is sufficient. The 'Real-time Cloud human activity recognition system' represents the cloud correspondent of the 'Machine learning processor Web Api', able to handle requests from multiple computer networks via the Internet. This component is not limited to a single local network and can be easily scaled and enhanced for the human activity recognition process. In this way, we can easily recognize real-time human activity from any source application that can make a web API request containing movement data.

The 'Machine learning core processor' is the main component able to perform human activity recognition. This component can train a neural network based on the generated plot images and allows this trained neural network to be used directly from a .NET Core application. The hosting application where the training takes place is the same one where the trained neural network is placed afterward; it is implemented in the form of a .NET Core console application project. This allows the neural network to be created, trained, saved, and tested all in one place. The neural network can later be moved into another project to allow further development. The machine learning core processor project thus also contains the logic required for model consumption and for the model to be retrained. The training time differs across runs and ranges from 1.12 to 5.8 h, depending on the size and number of the images used for training. After the training phase has been completed, the machine learning core processor can be used to run the activity recognition process locally.
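The training step described above (a pre-trained Resnet50 consumed through ML.NET's image classification trainer) can be sketched as follows. Column names, folder paths, the input record type, and the `LoadRecords` helper are illustrative assumptions, not the authors' exact code.

```csharp
using Microsoft.ML;
using Microsoft.ML.Vision;

public record ImageRecord(string ImagePath, string Label);

// Hypothetical sketch of the HAR training pipeline using ML.NET's
// ImageClassificationTrainer backed by a pre-trained Resnet50 model.
var mlContext = new MLContext();
IDataView data = mlContext.Data.LoadFromEnumerable(LoadRecords("plots")); // LoadRecords: assumed helper

var pipeline = mlContext.Transforms.Conversion.MapValueToKey("LabelKey", "Label")
    .Append(mlContext.Transforms.LoadRawImageBytes("Image", "plots", "ImagePath"))
    .Append(mlContext.MulticlassClassification.Trainers.ImageClassification(
        new ImageClassificationTrainer.Options
        {
            FeatureColumnName = "Image",
            LabelColumnName = "LabelKey",
            Arch = ImageClassificationTrainer.Architecture.ResnetV250 // Resnet50-family model
        }))
    .Append(mlContext.Transforms.Conversion.MapKeyToValue("PredictedLabel"));

var model = pipeline.Fit(data);
mlContext.Model.Save(model, data.Schema, "HarModel.zip"); // the zip later loaded by the Web API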

From the project's console application, any logic can be added to leverage the activity recognition functionality. The image data can be generated on the fly based on raw accelerometer data, or the system can use a database as a buffer for the image files or movement data. The core processor functionality can be incorporated into any .NET project type, like a desktop app or even a web application. The deep neural network model chosen for the image classification task is ``ImageClassificationMulti''. The available trainer for this image classification task is ``ImageClassificationTrainer'', which trains a DNN by using pre-trained models for classifying images, in this case Resnet50. For this trainer, normalization and cache integration are not required. This supervised ML task predicts the category or class of the image representing the activity type that we want to recognize. Each label starts as text and is converted into a numeric key via the ``TermTransform''. The output of the image classification algorithm is a classifier that can recognize the class, and thus the activity type, for a provided image.

Based on the machine learning core processor module, which is able to recognize human activity relying on the movement-generated plot image, a Web API application was built to expose this functionality to other components inside the local network. In this way, any device can receive the activity type as a response by making an API request containing either an already-generated plot image or the raw data required to generate the plot image. Since all the main processing is handled in a Web API application, we can save the received data in a database and even send notifications to other linked subsystems or components based on certain events. For example, email notifications can be sent if the system encounters an activity that is out of the ordinary, based on certain logic. A Web API component is useful for simplifying the system architecture, as it is scalable and allows the other subsystems to communicate easily using a fast and reliable method over proven protocols and technologies. Since we are using a stateless design, the lower components that perform the data acquisition do not need to be very powerful from a computing perspective.

We used the WISDM dataset to train a real-time human activity recognition system based on a Resnet50 neural network, which achieved a best precision of 94.89% using scatter plot images with overlapping scatter plots. Raw accelerometer and gyroscope data from the WISDM dataset are both used to generate the plot images that constitute the input data for the neural network training process.
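As an illustration of the simple data contract such a stateless endpoint could accept, a request body might carry one window of raw samples from both sensors. The field names and values below are hypothetical, not the published contract:

```json
{
  "windowSize": 60,
  "accelerometer": [
    { "x": 0.12, "y": -0.98, "z": 0.03 },
    { "x": 0.15, "y": -0.96, "z": 0.05 }
  ],
  "gyroscope": [
    { "x": 0.01, "y": 0.00, "z": -0.02 },
    { "x": 0.02, "y": 0.01, "z": -0.01 }
  ]
}
```

A payload of this shape lets even a very simple network-enabled device participate: it only needs to buffer one window and serialize it as JSON.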

The only requirement for an acquisition device is the ability to generate HTTP requests, in contrast to a real-time system designed around sockets, where communication is achieved via a bidirectional open channel. The lower layer of the acquisition device can gather data based on a window size and, when the data have reached the window size, an API request can be created with the entire window payload. The frequency of the API requests clearly depends on the chosen window size and on whether we want to overlap data windows or not. The web API project was built in-house, is based on the .NET framework, and is structured in the form of a minimal API project in .NET Core 7. Minimal API was chosen as it fits this kind of implementation well due to its low file count and clean architecture. Due to the cross-platform nature of .NET Core, this project can be deployed on multiple platforms, like Windows and Linux, and supports cloud integration as well. The minimal API file structure features a small number of configuration files and one single code entry point. The already-trained network is loaded from the generated machine learning model zip archive using the ``FromFile'' extension when registering the prediction engine pool.

The Web API project features two main endpoints used for activity recognition: one that is able to detect human activity based on a movement data plot image, and a second that is able to detect human activity based on a window of movement data gathered from an accelerometer and gyroscope. The OpenAPI specification represents the standard for defining RESTful interfaces, providing a technology-agnostic API interface that supports API development and consumption. Swagger is the tool that allows OpenAPI specification generation and usage in our web API project, and it contains powerful tools to fully use the OpenAPI Specification.

For the obtained results, the following activities were used: Walking, Jogging, Stairs, Sitting, Standing, Typing, and Brushing Teeth, for five selected users. The usage of a reduced dataset is due to the large size and number of the generated image datasets: we reduced the available total of 18 activities to 7 and the total of 30 users to 5. Reducing the number of analyzed users and activities provided a decent working dataset, and decreasing the generated plot image dimensions further reduced the size of the generated plot image dataset. We can clearly notice that scatter plot images with the overlapping scatter plots method obtained the best result; another method that obtained decent results uses population images with the 'BarMeanStDev' option. We managed to obtain a decent precision, with a maximum value of 94.89%, when using the scatter plot images with overlapping scatter plots, demonstrating that a real-time HAR system based on plot image recognition and REST requests can be a good architecture for a real-time activity recognition system.

In order to further improve the accuracy of the implemented system, additional plot-type images can be analyzed to see if a performance boost can be gained. Other types of neural networks can also be analyzed, and other custom TensorFlow models may even provide a better implementation to further expand the system. The system could be expanded to use more data from the initial dataset by increasing the number of analyzed activities and users, optimizing the training time with a more powerful training machine, and trying to lower the training time using other pre-trained deep neural networks.
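A minimal API setup along the lines described, loading the trained model zip through the prediction engine pool's ``FromFile'' registration, could look like the sketch below. The route name, model name, and the `ModelInput`/`ModelOutput` types are assumptions for illustration, not the project's actual identifiers.

```csharp
using Microsoft.Extensions.ML;

var builder = WebApplication.CreateBuilder(args);

// Register the trained model from the zip archive via FromFile,
// as described for the prediction engine pool (paths are illustrative).
builder.Services.AddPredictionEnginePool<ModelInput, ModelOutput>()
    .FromFile(modelName: "HarModel", filePath: "HarModel.zip", watchForChanges: true);

var app = builder.Build();

// Hypothetical endpoint: classify an already-generated plot image payload.
app.MapPost("/activity/from-image", (ModelInput input,
        PredictionEnginePool<ModelInput, ModelOutput> pool) =>
    Results.Ok(pool.Predict(modelName: "HarModel", example: input)));

app.Run();
```

Because the pool is registered in dependency injection, each request borrows a thread-safe prediction engine instead of reloading the model, which keeps the stateless per-window requests cheap.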

 
