Human Activities Recognition Based on Wrist-Worn Wearable Devices: Comparison
Please note this is a comparison between Version 1 by Alexandru Iulian Alexan and Version 3 by Catherine Yang.

Wearable technologies have slowly invaded our lives and can easily help with our day-to-day tasks. One area where wearable devices can shine is human activity recognition, as they can gather sensor data in a non-intrusive way. We propose a human activity recognition system based on a common wearable device: a smartwatch. The system is extensible, due to the wide range of sensing devices that can be integrated, and provides a flexible deployment model. The machine learning component recognizes activity based on plot images generated from raw sensor data. This service is exposed as a Web Application Programming Interface (API) microservice that can be deployed locally or directly in the cloud. The proposed system aims to simplify the human activity recognition process by exposing such capabilities via a web API. This web API can be consumed by small network-enabled wearable devices, even ones with basic processing capabilities, by leveraging a simple data contract interface and using raw data. The system replaces extensive pre-processing with high-performance image recognition based on plot images generated from raw sensor data. We have managed to obtain an activity recognition rate of 94.89% and to implement a fully functional real-time human activity recognition system.

  • human activity recognition
  • plot image analysis
  • real-time
  • cloud
  • ML.NET

1. Introduction

The concept of a smart house or smart living environment is more current than ever, as more and more everyday devices are equipped with smart capabilities. The importance of human activity recognition (HAR) research lies in the advantages of being able to monitor and assist a person who uses smart sensors. Internet of Things (IoT) technology is applied in multiple domains such as medicine, manufacturing, emergency response, logistics, and daily life activity recognition. The smartphone is one of the most used devices for HAR, as it can both record and process information itself. The major downside of using a smartphone for detecting the user's activity is data downtime while the device is not worn. If the smartphone is not worn or directly used by the user, the system does not receive any relevant data regarding the current activity and, thus, the HAR precision decreases. Smartphones are not necessarily worn consistently, as the wear position varies greatly depending on the person and situation. A watch has a more stable wear pattern, as it is primarily worn on the user's wrist and is usually worn extensively for long periods of time. A smartwatch is a small device that can be easily and non-intrusively worn for long periods of time, making it ideal for data acquisition.

Our main objective was to implement a real-time, cloud-based human activity recognition system that uses image classification at its core. The system should be able to expose HAR capabilities from the cloud or locally via REST API calls. To achieve this, a .NET Core C\#-based web API was implemented to expose the activity recognition functionality. The main HAR functionality was achieved using a deep neural network created from a pre-trained TensorFlow Resnet50 DNN model and trained for HAR using plot images. The system was trained using data from the WISDM dataset, which is open access. The training data were generated using a separate .NET Core C\# application that produced the plot images from raw accelerometer and gyroscope data. Since the proposed system can also be deployed to the cloud, it can easily be expanded to support multiple sensor modules and users at the same time, based on its REST implementation.

Since only raw accelerometer and gyroscope data are used by the proposed HAR platform, the set of usable sensor modules can be expanded to any module that has an accelerometer, a gyroscope, and network capabilities. This creates a kind of hardware abstraction layer, as such a module can be implemented with a relatively low number of physical components. It is a very simple option that allows basic network-enabled sensor modules to gain ML.NET deep neural network capabilities. A custom hardware implementation of a wrist-worn sensor module is also a viable option for integration into the proposed system, as a web API is used for the final system integration.
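To illustrate how small that integration surface is, a raw-data payload only needs to carry the two sensor streams for one data window. The following is a minimal sketch of such a contract; the type and property names are hypothetical and not the paper's actual data contract.

```csharp
// Hypothetical minimal data contract for one movement data window sent to
// the HAR web API; names are illustrative only.
public record SensorSample(double X, double Y, double Z);

public record MovementWindow(
    string DeviceId,               // identifies the sensor module
    SensorSample[] Accelerometer,  // raw accelerometer samples
    SensorSample[] Gyroscope);     // raw gyroscope samples
```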

Our contributions to the HAR field are as follows:

1. The implementation of a real-time system for human activity recognition that can operate locally and in the cloud via REST API calls based on image plot recognition:
   - the implementation and usage of a .NET C\# console application to generate labeled plot images based on raw accelerometer and gyroscope sensor data;
   - the creation of a .NET C\# application that contains a deep neural network created from a pre-trained TensorFlow DNN model and trained for HAR using plot images;
   - the integration of the created and trained neural network in a .NET Web API application capable of real-time activity recognition based on REST API calls;
   - the further extension of the HAR Web API application capabilities to allow cloud-based activity recognition.
2. The analysis of multiple scenarios for plot image generation configurations and plot types and the evaluation of the obtained activity recognition precision results.
3. The conclusion that a real-time HAR system, based on plot image recognition and REST requests, can be a good system architecture for real-time activity recognition.

For HAR based on accelerometer and gyroscope data, provided by sensors also found in a smartwatch, the classic approach is to use the raw sensor data and preprocess it. Features are then extracted and used to train a neural network for activity recognition. In this scenario, the neural network input is represented by a series of numeric values that try to capture the essence of that particular activity. Based on the raw sensor data or even extracted features, plot images can instead be generated and fed to the neural network as input data. The human activity recognition task then becomes an image classification task, trying, in essence, to identify the activity from the plot image using specific image classification neural networks. In order to simplify the classic time-series classification of the sensor data, the numerical sensor data from the smartwatch are used to transform a series of values, consisting of a data window, into a single data image. The numerical sensor data that are turned into a plot image can be graphically represented in multiple ways depending on the type of plot and the input raw data structure; these variations can have a significant impact on the activity recognition rate.

In this way, a visual representation of a data chunk is provided, one that is also easier for a human to analyze and interpret manually. Each image can show certain characteristics of that particular activity type, and different plot styles can be used. The `WISDM Smartphone and Smartwatch Activity and Biometrics Dataset' was chosen for this implementation, as it is one of the most important and most used datasets for human activity recognition based on wearable devices. The main preprocessing application that generates the plot images is written in C\# 6 and uses the ``ScottPlot'' plotting library for .NET; it is implemented as a .NET console application.
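As a concrete illustration of this preprocessing step, the following is a minimal sketch of generating one labeled plot image from a data window, assuming the ScottPlot 4.x API; the helper name, signal choice, and folder layout are illustrative rather than the paper's exact implementation.

```csharp
// Minimal sketch: turn one movement data window into a scatter plot image
// with ScottPlot (4.x API assumed); names and layout are illustrative.
using System.IO;
using ScottPlot;

public static class PlotImageGenerator
{
    public static void SaveWindowAsPlot(
        double[] time, double[] accel, double[] gyro,
        string activityLabel, string outputRoot, int index)
    {
        var plt = new Plot(400, 300);

        // Overlapping scatter plots of the raw signals in a single image,
        // the variant reported as the most accurate.
        plt.AddScatter(time, accel);
        plt.AddScatter(time, gyro);

        // One folder per activity label, so the images are already grouped
        // for the image classification training step.
        string dir = Path.Combine(outputRoot, activityLabel);
        Directory.CreateDirectory(dir);
        plt.SaveFig(Path.Combine(dir, $"{activityLabel}_{index}.png"));
    }
}
```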

2. Human Activities Recognition Based on Wrist-Worn Wearable Devices

The proposed system is able to perform real-time human activity recognition and consists of multiple components: a data preprocessing app, a machine learning core processor, a machine learning processor Web API, and a real-time cloud human activity recognition system. The 'Data preprocessing app' handles the conversion of raw movement data from an accelerometer and gyroscope into a plot image. This conversion transforms a movement data window into a single image that is used as input for the machine learning algorithm. The 'Machine learning core processor' represents the main computing project: a machine learning implementation trained using the previously generated movement images. After training, this component is capable of receiving an image and predicting its source activity. This behavior is supported only locally, as this component does not support network interactions or advanced conversions from raw data to images. The 'Machine learning processor Web API' is the component that incorporates the 'Machine learning core processor' and is able to support network connections and recognize human activity. The recognition process can be based on a generated image, but this component is also able to generate the image itself from raw accelerometer and gyroscope data. So, in order to recognize what activity a series of image files or movement data belongs to, a simple API call is sufficient. The 'Real-time cloud human activity recognition system' represents the cloud correspondent of the 'Machine learning processor Web API' and is able to handle requests from multiple computer networks via the Internet. This component is not limited to a single local network and can be easily scaled and enhanced for the human activity recognition process. In this way, real-time human activity can be recognized from any source application that can make a web API request containing movement data.

The machine learning core processor is the main component that performs human activity recognition. It can train a neural network based on the internally generated plot images and allows this trained neural network to be used directly from a .NET Core application. The hosting application where the training takes place is the same one where the trained neural network is placed afterward; it is implemented in the form of a .NET Core console application project. This allows the neural network to be created, trained, saved, and tested, all in one place. The neural network can later be moved into another project to allow further development. The machine learning core processor thus also contains the logic required for model consumption and for the model to be retrained. The training time differs across runs and ranges from 1.12 to 5.8 h, depending on the size and number of the images used for training. After the training phase has been completed, the machine learning core processor can be used to run the activity recognition locally. From the project's console application, any logic can be added to leverage the activity recognition functionality. The image data can be generated on the fly from raw accelerometer and gyroscope data, or the system can use a database as a buffer for the image files or movement data. The core processor functionality can be incorporated into any .NET project type, such as a desktop application or even a web application. The deep neural network model chosen for the image classification task is ``ImageClassificationMulti''. The available trainer for this task is ``ImageClassificationTrainer'', which trains a DNN by using pre-trained models for classifying images, in this case, Resnet50. For this trainer, normalization and caching are not required.
A supervised ML task predicts the category or class of the image representing the activity type that we want to recognize. Each label starts as text and is converted into a numeric key via the ``TermTransform''. The image classification algorithm output is a classifier that can recognize the class and the activity type for a provided image.
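To make this step concrete, the following is a minimal sketch of how such a training pipeline can be assembled with ML.NET's image classification trainer, assuming the Microsoft.ML and Microsoft.ML.Vision packages; the class names, column names, and folder conventions are illustrative, not the paper's exact code.

```csharp
// Minimal sketch of an ML.NET image classification training pipeline like
// the one described above; ImageData, HarModel.zip, and the column names
// are illustrative.
using System.Collections.Generic;
using Microsoft.ML;
using Microsoft.ML.Vision;

public class ImageData
{
    public string ImagePath { get; set; }
    public string Label { get; set; }
}

public static class HarTrainer
{
    public static void Train(string imageFolder, IEnumerable<ImageData> images)
    {
        var mlContext = new MLContext();
        IDataView data = mlContext.Data.LoadFromEnumerable(images);

        var pipeline = mlContext.Transforms.Conversion
                // Each text label is converted into a numeric key.
                .MapValueToKey("LabelKey", nameof(ImageData.Label))
            // Load the plot image bytes referenced by each record.
            .Append(mlContext.Transforms.LoadRawImageBytes(
                "Image", imageFolder, nameof(ImageData.ImagePath)))
            // DNN trainer built on a pre-trained backbone (here Resnet50).
            .Append(mlContext.MulticlassClassification.Trainers.ImageClassification(
                new ImageClassificationTrainer.Options
                {
                    FeatureColumnName = "Image",
                    LabelColumnName = "LabelKey",
                    Arch = ImageClassificationTrainer.Architecture.ResnetV250,
                }))
            // Map the predicted key back to the activity label text.
            .Append(mlContext.Transforms.Conversion.MapKeyToValue("PredictedLabel"));

        ITransformer model = pipeline.Fit(data);

        // Persist the trained model as a zip archive for later consumption.
        mlContext.Model.Save(model, data.Schema, "HarModel.zip");
    }
}
```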

Based on the machine learning core processor module, which is able to recognize human activity relying on the movement-generated plot images, a Web API application was built to expose this functionality to other components inside the local network. In this way, any device can receive the activity type as a response by making an API request containing either an already-generated plot image or the raw data required to generate the plot image.

Since all the main processing is handled in a .NET Web API application, the received data can be saved, resulting in a database, and real-time system notifications can even be sent to other linked subsystems or components based on certain events. For example, email notifications can be sent if the system encounters an activity that is out of the ordinary, based on certain logic. A Web API application component is useful for simplifying the system architecture, as it is scalable and allows the other subsystems to communicate easily using a fast and reliable method built on proven protocols and technologies.

Since the design is stateless, the lower components that perform the data acquisition do not need to be very powerful from a computing perspective. The only requirement is the ability to generate HTTP requests, in contrast to a real-time system designed around sockets, where communication is achieved via a bidirectionally opened channel. The lower layer of the acquisition device can gather data based on the window size, and when the data have reached the window size, an API request can be created with the entire window as payload. The frequency of the API requests clearly depends on the chosen window size and on whether data windows are overlapped or not.
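A sketch of that thin acquisition layer is shown below, assuming the hypothetical MovementWindow contract from the introduction and an illustrative endpoint URL.

```csharp
// Sketch of the acquisition layer: buffer samples until a full window is
// collected, then POST the payload over HTTP. Endpoint URL, device id, and
// window size are illustrative.
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

public class WindowedSender
{
    private readonly List<SensorSample> _accel = new();
    private readonly List<SensorSample> _gyro = new();
    private readonly HttpClient _http = new();
    private const int WindowSize = 60; // e.g., 3 s at a 20 Hz sampling rate

    public async Task AddSampleAsync(SensorSample accel, SensorSample gyro)
    {
        _accel.Add(accel);
        _gyro.Add(gyro);

        if (_accel.Count < WindowSize) return;

        var window = new MovementWindow("watch-01", _accel.ToArray(), _gyro.ToArray());
        _accel.Clear();
        _gyro.Clear();

        // One request per completed window; the request frequency therefore
        // follows directly from the window size and overlap policy.
        await _http.PostAsJsonAsync("https://example.org/api/har/raw", window);
    }
}
```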

The WISDM dataset was used to train the real-time human activity recognition system, which is based on a Resnet50 neural network and achieved a best precision of 94.89% using scatter plot images with overlapping scatter plots. Raw accelerometer and gyroscope data from the WISDM dataset are both used to generate the plot images that constitute the input data for the neural network training process. For the reported results, the following activities were used: Walking, Jogging, Stairs, Sitting, Standing, Typing, and Brushing Teeth, for five selected users. A reduced dataset was used because of the large number of generated plot images: the available total of 18 activities was reduced to 7, and the total of 30 users to 5. Reducing the number of analyzed users and activities provided a decent working dataset, and decreasing the dimensions of the generated plot images further reduced the size of the plot image dataset. Scatter plot images with the overlapping scatter plots method clearly obtained the best result; another method that obtained decent results is population images with the 'BarMeanStDev' option.

The web API project was built in-house, is based on the .NET framework, and is structured in the form of a minimal API project in .NET Core 7. Minimal API was chosen as it is perfect for this kind of implementation due to its low file count and clean architecture. Due to the cross-platform nature of .NET Core, this project can be deployed on multiple platforms, like Windows and Linux, and supports cloud integration as well.

The minimal API file structure features a small number of configuration files and a single code entry point. The already trained network is loaded from the generated machine learning model zip archive using the ``FromFile'' extension when registering the prediction engine pool.
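For illustration, such a registration could look as follows in the minimal API entry point, using the `AddPredictionEnginePool'/`FromFile' extensions from Microsoft.Extensions.ML; the input/output types, model name, and file path are hypothetical.

```csharp
// Sketch of a minimal API entry point that loads the trained model zip via
// the FromFile extension; type names, model name, and path are illustrative.
using Microsoft.Extensions.ML;

var builder = WebApplication.CreateBuilder(args);

builder.Services
    .AddPredictionEnginePool<ImageModelInput, ImageModelOutput>()
    .FromFile(modelName: "HarModel",
              filePath: "MLModels/HarModel.zip",
              watchForChanges: true);

var app = builder.Build();
app.Run();

// Hypothetical model input/output types for the prediction engine pool.
public class ImageModelInput
{
    public byte[] Image { get; set; }          // plot image bytes
}

public class ImageModelOutput
{
    public string PredictedLabel { get; set; } // recognized activity
}
```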

The Web API project features two main endpoints used for activity recognition: one that detects human activity based on a movement data plot image, and a second that detects human activity based on a window of movement data gathered from an accelerometer and gyroscope.
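Continuing the entry-point sketch above, the two endpoints could be mapped as follows; the routes, DTOs, and the plot-rendering helper are illustrative, not the paper's exact code.

```csharp
// Sketch of the two recognition endpoints on top of the registered
// prediction engine pool; names are illustrative.
app.MapPost("/api/har/image", (PlotImageRequest request,
    PredictionEnginePool<ImageModelInput, ImageModelOutput> pool) =>
{
    // Endpoint 1: the caller has already generated the plot image.
    var prediction = pool.Predict("HarModel",
        new ImageModelInput { Image = request.ImageBytes });
    return Results.Ok(prediction.PredictedLabel);
});

app.MapPost("/api/har/raw", (MovementWindow window,
    PredictionEnginePool<ImageModelInput, ImageModelOutput> pool) =>
{
    // Endpoint 2: render the plot image server-side from the raw movement
    // data window, then classify it.
    byte[] imageBytes = PlotRenderer.Render(window); // hypothetical helper
    var prediction = pool.Predict("HarModel",
        new ImageModelInput { Image = imageBytes });
    return Results.Ok(prediction.PredictedLabel);
});

public record PlotImageRequest(byte[] ImageBytes);
```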

The OpenAPI specification represents the standard for defining RESTful interfaces, providing a technology-agnostic API interface that supports API development and consumption. Swagger is the tool that allows OpenAPI specification generation and usage in the web API project, and it contains powerful tools to fully use the OpenAPI Specification.
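In a minimal API project, this support is typically enabled with Swashbuckle's standard registration calls, sketched below.

```csharp
// Sketch of enabling OpenAPI/Swagger support via Swashbuckle.AspNetCore.
builder.Services.AddEndpointsApiExplorer(); // expose minimal API endpoints
builder.Services.AddSwaggerGen();           // generate the OpenAPI document

// ...after builder.Build():
app.UseSwagger();   // serve the OpenAPI JSON document
app.UseSwaggerUI(); // serve the interactive Swagger UI
```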

In order to further improve the accuracy of the implemented system, additional plot-type images can be analyzed to see whether a performance boost can be gained. Other types of neural networks can also be analyzed, and other custom TensorFlow models may even provide a better implementation to further expand the system. The system could also be expanded to use more data from the initial dataset by increasing the number of analyzed activities and users, while the training time could be reduced by using a more powerful training machine or other pre-trained deep neural networks.

We managed to obtain a decent human activity recognition rate of 94.89% on the selected activity subset (Walking, Jogging, Stairs, Sitting, Standing, Typing, and Brushing Teeth), demonstrating that a real-time HAR system based on plot image recognition and REST requests can be a good system architecture for a real-time activity recognition system.
