AI-Based Resource Allocation Techniques in Wireless Sensor IoT: Comparison
Please note this is a comparison between Version 1 by Shruti Garg and Version 2 by Sirius Huang.

A wireless sensor network (WSN) is a network of many sensor nodes, typically deployed in remote locations to monitor environmental conditions. The IoT (Internet of Things)-based resource-constrained WSN has attracted considerable attention and development aimed at improving resource utilization as well as service delivery. For data transfer between heterogeneous devices, IoT requires a robust communication network and an optimally placed, energy-efficient WSN.

  • wireless sensor network
  • Internet of Things
  • resource allocation
  • energy efficiency
  • data optimization
  • deep learning

1. Introduction

One of the most important research topics in cloud computing is resource allocation, which aims at increasing service provider profitability and achieving customer satisfaction by meeting the promised service level agreement (SLA) conditions [1]. An SLA must be signed by service providers and cloud users to assure Quality of Service (QoS) [2]. Resource allocation has been considered one of the most significant topics to address when dealing with SLA situations. Because the load on physical servers changes over time, resource allocations must be managed dynamically. Dynamic resource allocation is exceedingly difficult, especially when QoS requirements change over time while considering processor availability and minimizing processor idle time. A WSN is a network of many sensor nodes, typically deployed in remote locations to monitor environmental conditions. Sensors such as acoustic, pressure, motion, image, chemical, weather, temperature, and optical sensors are installed in the sensor nodes (SN). Because of the diversity of SN, WSNs have a wide range of applications, from healthcare to military, defense, agriculture, and everyday life. Despite their wide applicability, WSNs face several common issues, such as restricted energy sources, processing speed, memory, and communication bandwidth, causing SN performance to degrade and network lifetime to decrease [3]. Creating distinct algorithms for these various purposes is a difficult endeavor. WSN designers must pay special attention to concerns such as data aggregation, clustering, routing, localization, fault detection, task scheduling, and event tracking, among others.
Wireless sensor nodes are small devices that detect atmospheric conditions including pressure, temperature, and humidity. They have a memory device to store the data and a channel to transfer it to the base station (BS) as well as to other devices. They are frequently dispersed, depending on the number of nodes used to collect data. Many earlier studies [4] address these problems by applying methodologies derived from signal communication theory in telephony, with the primary goal of ensuring reliable data delivery without noise. Because battery-powered sensors cannot be supplied with continuous power, researchers must focus on energy efficiency. Owing to the limited energy sources, a sensor node has a short lifespan, which reduces the system's network lifetime. Machine learning (ML) methods are known for learning from experience and for not requiring reprogramming [5]. ML is a useful approach that allows for efficient, dependable, and cost-effective computing. Its three main forms are supervised learning, unsupervised learning, and reinforcement learning (RL). Machine learning techniques have been found effective in resolving key WSN difficulties, and they have also proven beneficial in the realms of IoT, machine-to-machine (M2M) communication, and cyber-physical systems (CPS). ML can learn from a generalized structure and propose a generic solution to improve system performance. Owing to its diverse uses, it is applied in numerous scientific domains of medicine, engineering, and computing, such as manual data entry, automatic spam detection, medical diagnosis, image identification, data purification, and noise reduction. Recent research shows that machine learning has been used to overcome a variety of problems in WSNs. Using ML in WSNs enhances system performance while also reducing complicated chores such as reprogramming, manually accessing vast amounts of data, and extracting useful information from raw data. As a result, ML methods are very beneficial for retrieving enormous amounts of data as well as extracting meaningful data [6].

2. AI-Based Resource Allocation Techniques in Wireless Sensor IoT

Machine learning (ML) and deep learning (DL) methods for data processing could make edge devices smarter while also improving privacy and bandwidth usage. The authors of [7] applied deep learning for IoT in an edge computing environment and provided a method for improving network speed while also protecting user privacy. The authors of [8] suggested adaptive sampling-based data reduction methodologies. These methods work by analyzing the level of variance between data acquired over time and dynamically altering the sampling frequency of the sensors. Adaptive sampling algorithms function well when the gathered time series are stationary, but perform badly when dealing with rapidly changing data. The authors of [9] developed a dual prediction-based data reduction technique. The suggested technique works by building and deploying a model that represents the sensed phenomenon on both the edge node and the IoT devices. Prediction techniques have the advantage that the model at the edge predicts the sensed measurement without requiring a radio connection until the prediction error exceeds a predetermined threshold. Work [10] demonstrated that artificial intelligence and machine learning can be used to advance pervasive systems. AI-assisted ML systems will aid in merging human intuition and ingenuity with AI capacity, and AI-powered systems will assist in dynamically analyzing processing scenarios and adapting appropriate scheduling and resource allocation strategies. Ref. [11] suggests a framework for cloud computing systems to increase QoS while lowering the cost of providing services to end users. The framework focuses on consolidating VMs based on current resource utilization, creating virtual networks between VMs, and dynamically configuring virtual hubs. For WSN-aided IoT, ref. [12] proposed a QoS-aware secure DL technique for dynamic cluster-based routing. The authors of [13] built a DL-based link reliability prediction model for WSNs.
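The dual prediction idea attributed to [9] can be illustrated with a minimal sketch. Here both the sensor and the edge node share the simplest possible predictor, the last synchronized value; the sensor uses the radio only when the live reading drifts from that shared prediction by more than a threshold. The function name, threshold value, and choice of predictor are illustrative assumptions, not the cited authors' design.

```python
# Sketch of dual prediction-based data reduction (hypothetical simplification).
# Sensor and edge hold identical copies of the predictor, so between
# transmissions the edge reconstructs readings without any radio traffic.

def dual_prediction_stream(readings, threshold=1.0):
    """Return the (index, value) pairs the sensor would actually transmit.

    The shared model is the last synchronized value; a transmission occurs
    only when the prediction error exceeds `threshold`.
    """
    transmitted = [(0, readings[0])]  # the first sample is always sent
    prediction = readings[0]          # model state shared by sensor and edge
    for i, value in enumerate(readings[1:], start=1):
        if abs(value - prediction) > threshold:  # prediction error too large
            transmitted.append((i, value))       # use the radio to resync
            prediction = value                   # both sides update the model
        # otherwise: no transmission; the edge keeps using `prediction`
    return transmitted
```

With a slowly varying temperature trace, only the initial sample and the occasional reading that breaks the threshold consume radio energy, which is the bandwidth saving the prediction-based approach targets.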
That research designed a resilient routing algorithm for a better WSN routing mechanism. For lightweight subgraph extraction as well as labelling, a DL method known as the Weisfeiler–Lehman kernel and dual convolutional neural network (WL-DCNN) technique is presented. Resource allocation (RA) strategies are discussed in [14]. That work used a multi-objective optimization strategy to trade off speed, cost, and availability in a cloud-based application, and the methodology, which has been validated, can be up to 20% faster than existing optimization approaches. The author of [15] proposed a thermal-aware workload scheduling strategy to overcome the excessive power consumption and heat of data centers, utilizing an artificial neural network (ANN) to predict the data center's thermal behavior. In [14], a heterogeneous scheduling model is described; task resource utilization, as observed in the consolidation methods, was not taken into account. Ref. [15] addresses the offloading problem in cloud and fog computing. User fairness and the shortest possible delay are ensured by optimizing offloading decisions and allocating computing resources. The goal of this optimization problem is to reduce the weighted delay and energy consumption costs. The authors devised low-complexity, suboptimal methods to address this NP-hard issue: semidefinite relaxation and randomization are used to make offloading decisions, while fractional programming theory is used to manage resources. The authors of [16] provide a heuristic approach to resource allocation in which a task scheduling algorithm (TSA) is presented. This technique uses modules such as a divide-and-conquer TSA with resource allocation, a modified analytic hierarchy process, longest expected processing time, and bandwidth-aware divisible scheduling. The tasks are processed before they are assigned.
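The weighted delay-and-energy objective described for the offloading problem can be sketched with a toy cost model. The cited work solves an NP-hard joint problem via semidefinite relaxation; the sketch below only illustrates the underlying trade-off by enumerating the binary local-versus-offload choice per task. All names, weights, and the task fields are illustrative assumptions.

```python
# Toy illustration of a weighted delay + energy offloading decision
# (hypothetical model; not the cited algorithm, which handles the joint
# problem with semidefinite relaxation and randomization).

def offload_decisions(tasks, w_delay=0.5, w_energy=0.5):
    """For each task, pick 'local' or 'offload' to minimize the
    weighted sum of estimated delay and energy cost."""
    decisions = []
    for t in tasks:
        local_cost = w_delay * t["local_delay"] + w_energy * t["local_energy"]
        remote_cost = w_delay * t["offload_delay"] + w_energy * t["offload_energy"]
        decisions.append("local" if local_cost <= remote_cost else "offload")
    return decisions
```

Raising `w_energy` biases battery-constrained devices towards whichever option consumes less energy, which is exactly the kind of tunable trade-off the weighted objective expresses.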
With respect to the load and bandwidth of the cloud resources, the allocation is performed using BAR optimization and the associated BATS algorithms. Ref. [17] investigates energy efficiency (EE) when combining base stations (BS) and beamforming in multicell situations. Ref. [18] investigates the energy and spectrum efficiency of 5G massive MIMO networks. In [19], the authors developed an energy-efficient non-cooperative game for distributed cognitive radio networks (CRNs) over interference channels. For CRNs and IoT, a power-allocation-based noncooperative game is proposed in [20], which investigates a mesh adaptive direct search technique for device-to-device-assisted CRNs, and [21] proposes gradient-based optimization for power allocation as well as EE in CRNs. Although gradient techniques are reliable, they can fail to achieve global optimality. Heuristic algorithms are gaining popularity among researchers as a way to lower the computational complexity of optimization approaches, since for NP-complete problems heuristic techniques are simple to use and adapt. In [22], nonorthogonal multiple access (NOMA) is employed for IoT resource management in smart cities, and mixed-integer linear programming is presented for energy harvesting. To maximize the EE and spectral efficiency (SE) trade-off in CR-IoT, a mixed-integer nonlinear programming (MINLP) technique is presented. Optimizing and increasing the efficiency of this communication is an important consideration, and resource allocation is a critical bottleneck. To solve the challenge of resource allocation, researchers are using innovative AI methods that optimize allocation according to the data flow during network operation. These measures have moved the industry towards automated resource management at a large and complex scale.
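As a rough illustration of the gradient-based power allocation mentioned above, the sketch below runs projected gradient ascent on an energy-efficiency objective (sum rate divided by total consumed power). The channel gains, noise, circuit power, step size, and the use of a numerical gradient are all made-up simplifications, not the cited work's formulation; as noted above, such local methods are reliable but need not find the global optimum.

```python
# Minimal sketch of gradient-based power allocation for energy efficiency
# (hypothetical parameters: unit noise, unit circuit power, per-node cap).
import math

def ee(powers, gains, noise=1.0, p_circuit=1.0):
    """Energy efficiency: Shannon sum rate over total consumed power."""
    rate = sum(math.log2(1 + p * g / noise) for p, g in zip(powers, gains))
    return rate / (p_circuit + sum(powers))

def grad_ascent_power(gains, p_max=2.0, steps=200, lr=0.05, eps=1e-4):
    """Projected gradient ascent on EE over per-node powers in [0, p_max]."""
    powers = [p_max / 2] * len(gains)          # start mid-budget
    for _ in range(steps):
        grad = []
        for i in range(len(powers)):
            bumped = powers[:]                  # finite-difference partial
            bumped[i] += eps
            grad.append((ee(bumped, gains) - ee(powers, gains)) / eps)
        # ascent step, projected back onto the power budget [0, p_max]
        powers = [min(p_max, max(0.0, p + lr * g))
                  for p, g in zip(powers, grad)]
    return powers
```

Running this on two links with unequal gains shifts power towards the stronger channel, mirroring how EE-oriented allocation spends energy where it buys the most rate.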