Particle Swarm Optimisation in Internet of Things: History

The internet of things (IoT), a collection of diverse distributed nodes, supports a wide range of activities, from sleep monitoring and activity tracking to more complex tasks such as data analytics and management. With increasing scale come even greater complexities, leading to significant challenges such as excess energy dissipation, which can shorten IoT devices’ lifespans. IoT’s many variable activities and heavy data-management load greatly influence device lifespan, making resource optimisation a necessity. Existing approaches to resource management and optimisation pay limited attention to devices’ energy dissipation.

  • particle swarm optimisation
  • clustering
  • resource scheduling
  • resource allocation
  • resource optimisation

1. Introduction

The internet of things (IoT), the umbrella term for extending the internet beyond smartphones and computers to a whole range of things such as appliances, smart sensors, cars, and traffic lights [1], requires effective management for maximum efficiency. Over the years, IoT’s increasing popularity and ubiquitous nature, consisting of numerous distributed nodes with sensing, computing, and communication capabilities [2], have implied an even greater increase in delay-sensitive data generation that requires quick resources for execution [3]. The massive rise in data generation and consumption, on account of applications’ requirements for higher data rates, larger bandwidth, increased capacity, low latency, and high throughput [4], generates several challenges, including but not limited to tremendous traffic pressure on the network and on traditional cloud computing architectures [1]. The dynamically changing demand for resources with limited supply increases the risk of low quality of service (QoS) [5], consequently lowering users’ quality of experience (QoE). Optimising resource allocation techniques and satisfying users’ QoS requirements are principal issues in an IoT-based cloud computing environment [6].
Many real-world scenarios can be described by the resource allocation problem (RAP); several formulations of the RAP have been proposed in accordance with different problem scenarios [7,8,9]. RAPs for IoT are large-scale and multifaceted; these RAPs in the IoT environment, referred to as multiple-objective resource allocation problems (MORAP), are nondeterministic polynomial (NP)-complete and therefore cannot be handled by deterministic algorithms [7,8,10]. Over the years, metaheuristics such as evolutionary algorithms (EA), particularly genetic algorithms (GA), have been studied to find near-optimal solutions; however, GA have a tendency to reproduce a large number of infeasible solutions during the search process [10]. The limitations of GA therefore prompted the authors of [7] to propose a solution for the nonlinear MORAP: a particle swarm optimisation (PSO) metaheuristic clustering approach focused on Pareto-optimal solutions, that is, solutions that are not dominated by any other solution, where no preference criterion can be made better off without making at least one other preference criterion worse off. A Pareto-optimal solution based on resource ready time is used to transform problems into multi-objective decisions to solve scheduling problems; it has been shown that the proposed Pareto optimisation algorithms produce optimal solutions [7,11]. Considering that IoT’s main function revolves around environment monitoring, data collection, and processing, prolonging the network’s lifespan by optimising energy consumption has become a critical issue that necessitates utmost attention [12,13]. The inefficient utilisation of resources in IoT nodes may cause sensor and actuator nodes to be lost prematurely, owing to their small battery power [13].
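The Pareto-dominance relation used above can be made concrete with a short sketch. This is illustrative only: the two objectives (energy, latency) and the candidate values are hypothetical, and both objectives are assumed to be minimised.

```python
def dominates(a, b):
    """True if solution a Pareto-dominates b (minimisation): a is no worse
    on every objective and strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only the solutions not dominated by any other solution."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# Hypothetical objective tuples: (energy used, latency), both minimised.
front = pareto_front([(3, 5), (2, 7), (4, 4), (5, 5), (2, 9)])
# (5, 5) is dominated by (3, 5); (2, 9) is dominated by (2, 7).
```

Every solution left in `front` trades one objective against the other, so improving its energy figure would necessarily worsen its latency, and vice versa.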
Consequently, edge computing (EC) paradigms, centred on edge nodes, have become increasingly popular for solving IoT’s MORAP; this shift in approach has brought about ample research on resource allocation (RA) as a means of managing the traffic pressure and challenges associated with IoT and cloud computing architectures. It is well established that the edge computing paradigm comes with some of the same problems as cloud computing; such problems include scheduling service resources, ensuring quality of service (QoS), and combining different services [1,6,7].
Edge nodes, which comprise a variety of devices, also include sensor nodes; these sensor nodes are more prone to problems of sensing coverage, that is, the sensors’ capacity to supervise a specified area of interest; connectivity, specifically the sensors’ communication capacity; and energy consumption, owing to their rechargeable battery constraints [12]. Wireless sensor networks (WSN), a subsection of IoT, can be considered a necessity for the development of IoT, as they are incorporated in a wide range of applications such as smart homes, smart transport, and smart healthcare [14]. WSN have repeatedly proven to be the weak point of IoT systems, as a malfunction in a WSN’s performance will inevitably lead to some form of IoT system failure. Sensor edge nodes are plagued by low power and limited storage capacity; thus, when developing a model, it is necessary to consider the energy dissipated by their activities. Prevalent state-of-the-art approaches are centralised algorithms, with or without cluster-head (CH) rotation, that do not take into consideration the heterogeneous nature of IoT devices in terms of varying power, processing, and storage capacity. These existing systems assume all nodes perform equally; this expectation, however, brings sensitive latency issues and an inadequate load-balancing strategy, alongside the inadequate optimisation of resources in IoT systems [7,10,14,15]. Load-balancing activities, a subset of the RAP, should most importantly decrease energy consumption whilst increasing QoS [5]. The work of [14] developed a centralised architecture-based clustering algorithm for load balancing in IoT called C-LBCA. Using PSO for cluster formation, the C-LBCA algorithm advocates an architecture where the software-defined network (SDN) controller responsible for complex computations is implemented over the cloud, with the aim of reducing the functionality required of the WSN nodes.
The extensive simulation results presented, in terms of network lifetime, energy dissipation, and volume of data sent to the sink, validate the proposed C-LBCA. Although their work produced results indicating a significant extension of the battery life of IoT devices over the energy-aware clustering algorithm (PSO-C), stable election protocol (SEP), and low-energy adaptive clustering hierarchy (LEACH), their focus was limited to the effects of load balancing on the lifespan of the WSN, and no outputs were presented with regard to the efficacy of the algorithm in overall system performance, especially given that the lifespan of the edge nodes, including sensor nodes, inadvertently affects the system’s turnaround time. Moreover, considering IoT’s increasing applicability to everyday living, their centralised network architecture is susceptible to multiple problems associated with nodal failure. Centralised topologies continuously expose IoT systems to heightened complexity and the risk of system failure from just a single point of nodal failure.
In a bid to avoid the problems associated with potential node failure arising from the centralised topology, the authors of [15] incorporated standards that allow for CH rotation; however, in this approach, the issue of inadequate maintenance persists and complexity increases. In their work, the authors of [15] developed a balanced energy-efficient (BEE) clustering algorithm that elects CHs according to both energy consumption and sensor distribution, extending the network’s longevity whilst maintaining its coverage. Even though the authors of [15] generated results indicating dominance over LEACH, hybrid energy-efficient distributed (HEED), and a non-cluster routing solution (direct routing), their approach did not cater for heterogeneous IoT systems, as it assumed that all nodes had the same level of energy storage. Although most edge nodes have a longer lifespan than sensor nodes, they, like sensor nodes, are also plagued by connectivity and energy consumption problems, similarly owing to their rechargeable battery constraints.

2. The PSO, RR, and RB Algorithms

Particle swarm optimisation (PSO), a stochastic optimisation technique based on the movement and intelligence of swarms, is an increasingly popular algorithm for solving resource-scheduling problems owing to its operational simplicity and speed of convergence [1]. PSO, also considered an EA owing to its use of mechanisms motivated by nature, starts from a random solution and determines the optimal solution by iteration; previously good positions attract the particles, and evolution takes place if a particle flies to a better contour [2,10]. PSO has been found to be superior to GA, considering that PSO carries out global and local search simultaneously, as opposed to GA’s concentration on global search alone. Although PSO has numerous advantages and has shown outstanding performance in solving practical optimisation problems, IoT included, it still has inherent defects that make it prone to falling into local optima, premature convergence, and excess overhead cost [1]. When discussing the evolution of the particle swarm, the search space rather than the problem space is of concern; each particle’s position can be regarded purely as a D-dimensional vector for a single point in the D-dimensional search space (D = 2T), and, similarly, each particle’s velocity is a D-dimensional vector [10]. The particle swarm algorithm can search a large candidate solution space without gradient information. The PSO algorithm is simple in structure and fast to converge, which makes it suitable for scheduling; however, PSO is prone to local optima, and its fast convergence necessitates that the global search be performed first. Once an optimal solution is determined within a specific range, the local search ability is used to home in on the optimal solution’s position [21]. Given PSO’s straightforward and easily implemented characteristics, it is used to solve the problem of resource optimisation in IoT.
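The iteration described above, in which particles are attracted to previously good positions, can be sketched in a minimal, generic PSO. This is illustrative only: the inertia weight `w = 0.7`, acceleration coefficients `c1 = c2 = 1.5`, and the sphere test function are common textbook defaults, not taken from the entry.

```python
import random

def pso(fitness, dim, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimise `fitness` over a `dim`-dimensional box with basic PSO."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # each particle's best position
    pbest_val = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # swarm-wide best position

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # velocity update: inertia + cognitive pull + social pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = fitness(pos[i])
            if val < pbest_val[i]:                 # particle flew to a better contour
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Example: minimise the sphere function, whose optimum is 0 at the origin.
best, best_val = pso(lambda x: sum(v * v for v in x), dim=2)
```

The cognitive term (`pbest`) provides the local search and the social term (`gbest`) the global search, which is why both proceed simultaneously, as noted above.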
An effective way of mitigating and/or completely avoiding data collision is to introduce communication protocols that schedule the data transmission [22]. The round-robin (RR) static algorithm, the oldest and simplest scheduling protocol, usually used for multitasking, allocates equal time slots of tasks/requests to resources/servers. Each task runs in turn from a cyclic queue, bounded by a time slice also referred to as the quantum time [22,23]. Round robin is a real-time pre-emptive algorithm that responds to real-time events; at each transmission instance, round robin regulates a node’s access according to a predetermined circular order [23].
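The quantum-bounded cyclic behaviour described above can be simulated with a short sketch (the task names and burst times are hypothetical):

```python
from collections import deque

def round_robin(tasks, quantum):
    """Simulate round-robin scheduling.

    `tasks` maps task id -> remaining burst time; each task gets at most
    `quantum` time units per turn. Returns the sequence of (task, run)
    slices and each task's completion time.
    """
    queue = deque(tasks.items())          # the cyclic queue
    timeline, finish, clock = [], {}, 0
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)     # bounded by the time slice
        clock += run
        timeline.append((name, run))
        if remaining > run:
            queue.append((name, remaining - run))  # re-queue unfinished task
        else:
            finish[name] = clock
    return timeline, finish

timeline, finish = round_robin({"t1": 5, "t2": 3, "t3": 1}, quantum=2)
# t1 and t2 alternate in 2-unit slices; t3 finishes within its first turn.
```

Because every waiting task gets a turn each cycle, no single node can monopolise the channel, which is the collision-avoidance property the protocol relies on.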
Resource-based (RB) algorithms, on the other hand, use a heuristic methodology by means of a greedy approach, prioritising the allocation of resources from the largest demand to the smallest [8]. That is, the algorithm iteratively selects the most demanding request/task and allocates it an appropriate and available resource/server for processing, thereby maximising throughput and reducing power [8,9]. The RB dynamic algorithm is practical for heterogeneous requests/tasks, where records of the resources’ performance over a period are consulted to find the best match; this yields overall performance improvements.
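The largest-demand-first greedy matching described above can be sketched as follows; the task demands, server capacities, and the "most remaining capacity" tie-break are hypothetical choices for illustration:

```python
def resource_based(tasks, servers):
    """Greedy resource-based allocation sketch.

    `tasks` maps task id -> demand; `servers` maps server id -> capacity.
    The most demanding unassigned task is matched first, to the server
    with the most remaining capacity.
    """
    free = dict(servers)
    assignment = {}
    # iterate tasks from largest demand to smallest
    for task, demand in sorted(tasks.items(), key=lambda kv: -kv[1]):
        server = max(free, key=free.get)   # server with most spare capacity
        if free[server] >= demand:
            assignment[task] = server
            free[server] -= demand
        else:
            assignment[task] = None        # no server can host this task now
    return assignment

alloc = resource_based({"a": 4, "b": 2, "c": 3}, {"s1": 5, "s2": 5})
```

Placing the largest requests first leaves the small, easily packed requests for last, which is what lets the greedy pass keep throughput high.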
The dynamic RB algorithm’s adaptability gives the RR-RB hybrid algorithm the potential to become a controlled adaptive algorithm for load balancing and, consequently, for the resource optimisation of IoT systems. The PSO algorithm’s deficiency of falling into local optima is tackled in the proposed model by incorporating the RR-RB hybrid algorithm. This meld ensures the best path is always presented, as the best of both worlds is deployed: a cluster that is both efficient and free from premature CH death due to excess energy dissipation, thus allocating resources, balancing load efficiently, and prolonging network lifespan.
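The entry does not detail the internals of the RR-RB hybrid, but one hypothetical way to combine the two rules is to dispatch round-robin while server loads stay balanced and fall back to the resource-based rule once they drift apart. Everything below (the imbalance threshold, task demands, server capacities) is an assumption for illustration, not the authors’ design.

```python
def hybrid_rr_rb(tasks, servers, imbalance_threshold=0.25):
    """Illustrative RR-RB hybrid dispatcher (hypothetical structure).

    Tasks are dispatched round-robin while the spread between the most
    and least loaded server stays within `imbalance_threshold` of total
    capacity; beyond that, the resource-based rule (most remaining
    capacity first) takes over.
    """
    names = list(servers)
    load = {s: 0 for s in names}
    rr_index, assignment = 0, {}
    total_capacity = sum(servers.values())
    for task, demand in tasks.items():
        spread = max(load.values()) - min(load.values())
        if spread <= imbalance_threshold * total_capacity:
            server = names[rr_index % len(names)]   # round-robin turn
            rr_index += 1
        else:
            # resource-based fallback: most remaining capacity first
            server = max(names, key=lambda s: servers[s] - load[s])
        assignment[task] = server
        load[server] += demand
    return assignment

alloc = hybrid_rr_rb({"a": 1, "b": 8, "c": 1, "d": 1}, {"s1": 10, "s2": 10})
```

In this sketch the heavy task `b` tips the spread past the threshold, so the remaining tasks are steered to the lighter server instead of continuing the strict rotation.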

This entry is adapted from the peer-reviewed paper 10.3390/s23042329
