Giant Armadillo Optimization: Comparison
Please note this is a comparison between Version 1 by Saikat Gochhait and Version 2 by Rita Xu.

A new bio-inspired metaheuristic algorithm called Giant Armadillo Optimization (GAO) is introduced, which imitates the natural behavior of giant armadillos in the wild. The fundamental inspiration for the design of GAO is the hunting strategy of giant armadillos, which move towards prey positions and dig into termite mounds.

  • optimization
  • bio-inspired
  • metaheuristic
  • giant armadillo

1. Introduction

There are many problems in mathematics, science, and real-world applications that have more than one feasible solution. These types of problems are known as optimization problems, and the process of obtaining the best feasible solution among all the existing solutions is called optimization [1]. Each optimization problem is mathematically modeled using three main parts: decision variables, problem constraints, and an objective function. The goal in optimization is to assign appropriate values to the decision variables so that the objective function is optimized while the constraints of the problem are respected [2]. There are numerous optimization problems in science, mathematics, engineering, technology, industry, and real-world applications that need to be solved using optimization techniques. Techniques for solving optimization problems are classified into two classes: deterministic and stochastic approaches [3]. Deterministic approaches, which fall into two categories, gradient-based and non-gradient-based, are effective in solving linear, convex, continuous, differentiable, and low-dimensional problems [4]. However, as optimization problems become more complex, especially as the problem dimensions increase, deterministic approaches lose their effectiveness and get stuck in local optima [5]. This is despite the fact that many practical optimization problems are non-linear, non-convex, non-differentiable, non-continuous, and high-dimensional. These disadvantages of deterministic approaches in solving practical optimization problems have led researchers to design stochastic approaches [6].
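As a concrete illustration of the three parts named above, the toy problem below minimizes a sphere function over two decision variables subject to a single linear constraint. The specific objective, constraint, and names are illustrative choices, not taken from the text:

```python
# Toy optimization problem illustrating the three modeling parts:
# decision variables x = (x1, x2); constraint x1 + x2 <= 4;
# objective: minimize the sphere function f(x) = x1^2 + x2^2.
# (Illustrative example only, not a problem from the source.)

def objective(x):
    """Objective function to minimize."""
    return sum(v * v for v in x)

def is_feasible(x):
    """Problem constraint: the candidate must satisfy x1 + x2 <= 4."""
    return x[0] + x[1] <= 4

candidate = (1.0, 2.0)   # one assignment of the decision variables
assert is_feasible(candidate)
print(objective(candidate))  # 5.0
```

Optimization then means searching the feasible region for the assignment of decision variables with the smallest objective value.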
Metaheuristic algorithms are among the most efficient and well-known stochastic approaches that have been used to deal with numerous optimization problems. These algorithms are able to provide suitable solutions for optimization problems through random search of the problem-solving space, benefiting from random operators and trial-and-error processes. The optimization mechanism in metaheuristic algorithms starts with the random generation of a certain number of candidate solutions, known as the algorithm’s population. These candidate solutions are then improved over successive iterations according to the algorithm’s population update steps. After the algorithm has fully executed, the best candidate solution found is presented as the solution to the problem [7]. The stochastic nature of the search means there is no guarantee that metaheuristic algorithms will definitively reach the global optimum. However, because they lie close to the global optimum, the solutions obtained from metaheuristic algorithms are acceptable as pseudo-optimal solutions [8]. The desire of researchers to achieve more effective solutions, closer to the global optimum, for optimization problems has led to the design of numerous metaheuristic algorithms [9]. These metaheuristic algorithms have been used to tackle optimization problems in various sciences, such as static optimization problems [10], green product design [11], feature selection [12], design for disassembly [13], image segmentation [14], and wireless sensor network applications [15].
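The generic workflow described above (random initial population, improvement over iterations, return of the best candidate) can be sketched as follows. The update rule here is a simple random perturbation chosen purely for illustration; it is not the update step of GAO or of any specific algorithm:

```python
import random

# Minimal sketch of the generic population-based metaheuristic loop.
# All names and the perturbation-based update rule are illustrative.

def metaheuristic(objective, dim, lb, ub, pop_size=20, iters=100, seed=0):
    rng = random.Random(seed)
    # 1. Randomly generate the initial population inside [lb, ub]^dim.
    pop = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(pop_size)]
    best = min(pop, key=objective)
    # 2. Improve the candidates over successive iterations (here: a small
    #    Gaussian perturbation, accepted only if it improves the candidate).
    for _ in range(iters):
        for i, x in enumerate(pop):
            trial = [min(ub, max(lb, v + rng.gauss(0, 0.1))) for v in x]
            if objective(trial) < objective(x):
                pop[i] = trial
        best = min(min(pop, key=objective), best, key=objective)
    # 3. Return the best candidate found as the pseudo-optimal solution.
    return best

sol = metaheuristic(lambda x: sum(v * v for v in x), dim=2, lb=-5, ub=5)
```

Because the search is stochastic, repeated runs with different seeds return different pseudo-optimal solutions, which is exactly the "no guarantee of the global optimum" property noted above.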
Metaheuristic algorithms can achieve effective solutions for optimization problems only when they search the problem-solving space well at both the global and local levels. Global search reflects the exploration power of the algorithm: a broad search of the problem-solving space aimed at discovering the main optimal region and preventing the algorithm from getting stuck in local optima. Local search reflects the exploitation power of the algorithm: a precise search near the promising areas of the problem-solving space and the solutions already discovered. In addition to exploration and exploitation abilities, what leads to the success of a metaheuristic algorithm in providing a suitable solution for an optimization problem is balancing the two during the search of the problem-solving space [16].
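One common way to realize this balance is to shrink the search step over iterations: early moves are large and global (exploration), later moves are small and local (exploitation). The sketch below illustrates the idea; the linear decay schedule and all names are illustrative assumptions, not a mechanism taken from the text:

```python
import random

# Exploration-to-exploitation balance via a decaying step size.
# Illustrative sketch only; the linear schedule is an assumption.

def search(objective, dim, lb, ub, iters=200, seed=1):
    rng = random.Random(seed)
    best = [rng.uniform(lb, ub) for _ in range(dim)]
    best_f = objective(best)
    for t in range(iters):
        # Step size decays linearly from the full range (global search)
        # to near zero (local refinement).
        step = (ub - lb) * (1 - t / iters)
        trial = [min(ub, max(lb, v + rng.uniform(-step, step))) for v in best]
        f = objective(trial)
        if f < best_f:  # keep the trial only if it improves the best
            best, best_f = trial, f
    return best, best_f

best, best_f = search(lambda x: sum(v * v for v in x), dim=2, lb=-5.0, ub=5.0)
```

With only large steps the search never refines its discoveries, and with only small steps it gets stuck near its starting point; the decay schedule is one simple way to get both behaviors in a single run.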

2. Giant Armadillo Optimization

Metaheuristic algorithms have been developed with inspiration from various natural phenomena, the behaviors of living organisms in the wild, the genetic, biological, and physical sciences, game rules, human interactions, and other evolutionary phenomena. Metaheuristic algorithms are classified into five groups based on the main idea behind their design: swarm-based, evolutionary-based, physics-based, human-based, and game-based approaches. Swarm-based metaheuristic algorithms are inspired by the lifestyles of animals, birds, insects, aquatic animals, reptiles, and other living creatures in the wild. The most well-known algorithms in this group are: Particle Swarm Optimization (PSO) [17][18], Ant Colony Optimization (ACO) [18][19], Artificial Bee Colony (ABC) [19][20], and Firefly Algorithm (FA) [20][21]. PSO is inspired by the group movement of flocks of birds and fish towards food sources. ACO is inspired by the ability of ants to discover the optimal communication path between the colony and the food source. ABC is inspired by the activities of colony bees searching for food sources. FA is inspired by optical communication between fireflies. The Grey Wolf Optimizer (GWO) is a swarm-based metaheuristic algorithm that is inspired by the hierarchical leadership structure and social behavior of gray wolves during hunting [21][22]. Green Anaconda Optimization (GAO) is inspired by the ability of male green anacondas to detect the position of females during the mating season and the hunting strategy of green anacondas [22][23].
Among the natural behaviors of living organisms in the wild, foraging, hunting, digging, migration, and chasing are much more prominent and have been employed in the design of algorithms such as: Honey Badger Algorithm (HBA) [23][24], African Vultures Optimization Algorithm (AVOA), Whale Optimization Algorithm (WOA) [24][25], Orca Predation Algorithm (OPA) [25][26], Reptile Search Algorithm (RSA) [26][27], Kookaburra Optimization Algorithm (KOA) [27][28], Mantis Search Algorithm (MSA) [28][29], Liver Cancer Algorithm (LCA) [29][30], Marine Predator Algorithm (MPA) [30][31], Tunicate Swarm Algorithm (TSA) [31][32], White Shark Optimizer (WSO) [32][33], and Golden Jackal Optimization (GJO) [33][34]. Evolutionary-based metaheuristic algorithms are designed with inspiration from genetic and biological sciences, concepts of natural selection, survival of the fittest, Darwin’s theory of evolution, and evolutionary operators. Genetic Algorithm (GA) [34][35] and Differential Evolution (DE) [35][36] are the most famous algorithms of this group, which were developed with inspiration from the reproduction process, genetic and biological concepts, and the evolutionary-random operators of selection, crossover, and mutation. Artificial Immune Systems (AISs) are inspired by the mechanisms of the human body’s immune system against microbes and diseases [36][37]. Some other evolutionary-based metaheuristic algorithms are: Genetic Programming (GP) [37][38], Cultural Algorithm (CA) [38][39], and Evolution Strategy (ES) [39][40]. Physics-based metaheuristic algorithms are designed with inspiration from the phenomena, forces, transformations, laws, and concepts of physics. Simulated Annealing (SA) is one of the most widely used algorithms of this group, which is inspired by the annealing process of metals, in which metals are first melted under heat and then slowly cooled with the aim of achieving an ideal crystal.
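The annealing analogy maps directly onto code: a "temperature" starts high, so worsening moves are often accepted (the molten, exploratory phase), and is cooled slowly so the search settles into a low-objective state. A minimal SA sketch follows; the cooling rate, step size, and test function are illustrative choices rather than the settings of any published SA variant:

```python
import math
import random

# Minimal simulated annealing sketch for the metal-annealing analogy.
# Cooling schedule, step size, and test problem are illustrative.

def simulated_annealing(objective, x0, temp=10.0, cooling=0.95,
                        iters=500, seed=0):
    rng = random.Random(seed)
    x, fx = list(x0), objective(x0)
    best, best_f = list(x), fx
    for _ in range(iters):
        trial = [v + rng.gauss(0, 0.5) for v in x]
        f = objective(trial)
        # Always accept improvements; accept worsening moves with a
        # probability exp(-delta / temp) that shrinks as temp drops.
        if f < fx or rng.random() < math.exp((fx - f) / temp):
            x, fx = trial, f
            if fx < best_f:
                best, best_f = list(x), fx
        temp *= cooling  # slow geometric cooling
    return best, best_f

best, best_f = simulated_annealing(lambda x: sum(v * v for v in x), [4.0, 4.0])
```

At high temperature the acceptance rule behaves like a random walk; as the temperature approaches zero it degenerates into pure hill climbing, which is how the single parameter trades exploration for exploitation over the run.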
Physical forces and Newton’s laws of motion have been the source of design in algorithms such as the Momentum Search Algorithm (MSA) [40][41] based on momentum force, the Gravitational Search Algorithm (GSA) based on gravitational attraction force [41][42], and the Spring Search Algorithm (SSA) [42][43] based on the elastic force of the spring and Hooke’s law. Cosmological concepts have been the origin of design in algorithms such as the Multi-Verse Optimizer (MVO) [43][44] and the Black Hole Algorithm (BHA) [44][45]. Some other physics-based metaheuristic algorithms are: Archimedes Optimization Algorithm (AOA) [45][46], Water Cycle Algorithm (WCA) [46][47], Artificial Chemical Process (ACP) [47][48], Chemotherapy Science Algorithm (CSA) [48][49], Nuclear Reaction Optimization (NRO) [49][50], Henry Gas Optimization (HGO) [50][51], Electro-Magnetism Optimization (EMO) [51][52], Lichtenberg Algorithm (LA) [52][53], Thermal Exchange Optimization (TEO) [53][54], and Equilibrium Optimizer (EO) [54][55]. Human-based metaheuristic algorithms are designed with inspiration from thoughts, choices, decisions, communication, interactions, and other human activities in individual and social life. Teaching-Learning-Based Optimization (TLBO) is one of the most famous human-based metaheuristic algorithms, introduced with inspiration from educational communication in the classroom and the exchange of knowledge between teachers and students and among students themselves [55][56]. The Mother Optimization Algorithm (MOA) is proposed based on modeling Eshrat’s care of her children [56][57]. Doctor and Patient Optimization (DPO) is introduced based on modeling the process of treating patients by doctors [57][58]. Sewing Training-Based Optimization (STBO) is proposed with inspiration from the teaching of sewing skills by instructors to students in sewing schools [58][59].
Ali Baba and the Forty Thieves (AFT) is presented based on modeling the strategies of forty thieves in the search for Ali Baba [59][60]. Some other human-based metaheuristic algorithms are: Election-Based Optimization Algorithm (EBOA) [60][61], Coronavirus Herd Immunity Optimizer (CHIO) [61][62], Group Teaching Optimization Algorithm (GTOA) [62][63], Ebola Optimization Search Algorithm (ESOA) [63][64], Driving Training-Based Optimization (DTBO) [5], Gaining Sharing Knowledge-Based Algorithm (GSK) [64][65], and War Strategy Optimization (WSO) [65][66]. Game-based metaheuristic algorithms are inspired by the rules governing individual and team games and the strategies of players, coaches, referees, and other influential people in these games. Darts Game Optimizer (DGO) is one of the most well-known game-based metaheuristic algorithms, whose design is inspired by the strategy and skill of players in throwing darts and collecting points [66][67]. Hide Object Game Optimizer (HOGO) is proposed based on the simulation of players’ strategies for finding the hidden object on the playing field [67][68]. The Orientation Search Algorithm (OSA) is designed based on modeling the players’ position changes on the playing field based on the referee’s commands [68][69]. Some other game-based metaheuristic algorithms are: Ring toss game-based optimization (RTGBO) [69][70], Football Game Based Optimization (FGBO) [70][71], Archery Algorithm (AA) [6], Golf Optimization Algorithm (GOA) [71][72], and Volleyball Premier League (VPL) [72][73]. Some other recently proposed metaheuristic algorithms are: Monarch Butterfly Optimization (MBO) [73][74], Slime Mould Algorithm (SMA) [74][75], Moth Search Algorithm (MSA) [75][76], Hunger Games Search (HGS) [76][77], Runge Kutta method (RUN) [77][78], Colony Predation Algorithm (CPA) [78][79], weighted mean of vectors (INFO) [79][80], Harris Hawks Optimization (HHO) [80][81], and Rime optimization algorithm (RIME) [81][82].