Pufferfish Optimization Algorithm

Optimization problems are problems that have more than one feasible solution. Optimization is therefore the process of finding the best solution among all feasible solutions to such a problem.

  • optimization
  • bio-inspired
  • metaheuristic
  • pufferfish
  • exploration
  • exploitation

1. Introduction

Optimization problems are problems that have more than one feasible solution; optimization is therefore the process of finding the best solution among all feasible solutions [1]. From a mathematical point of view, any optimization problem can be modeled using three parts: decision variables, constraints, and the objective function. The main goal in optimization is to assign values to the decision variables so that the objective function is optimized while the constraints of the problem are respected [2]. Numerous optimization problems in science, engineering, mathematics, technology, industry, and other real-world applications must be solved with appropriate techniques. Techniques for solving optimization problems are classified into two groups: deterministic and stochastic approaches [3]. Deterministic approaches, in their two classes of gradient-based and non-gradient-based methods, perform effectively on convex, linear, continuous, differentiable, and low-dimensional problems [4]. However, as problems become more complex, and especially as their dimensionality increases, deterministic approaches become inefficient because they get stuck in local optima [5]. On the other hand, many practical optimization problems are non-convex, non-linear, discontinuous, non-differentiable, and high-dimensional. The ineffectiveness of deterministic approaches on practical optimization problems with these characteristics has led researchers to develop stochastic approaches [6].
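As a concrete illustration of these three parts, the hedged Python sketch below encodes a small hypothetical problem: two decision variables with simple bounds, one inequality constraint handled through a penalty term, and an objective function to be minimized. The specific problem, bounds, and penalty weight are illustrative assumptions, not taken from the cited works.

    # Hypothetical example of the three parts of an optimization problem:
    #   decision variables: x1, x2, each in [-10, 10]
    #   constraint:         x1 + x2 <= 4
    #   objective:          minimize f(x) = (x1 - 1)^2 + (x2 - 2)^2

    LOWER, UPPER = -10.0, 10.0  # bounds on the decision variables

    def objective(x):
        x1, x2 = x
        return (x1 - 1.0) ** 2 + (x2 - 2.0) ** 2

    def constraint_violation(x):
        x1, x2 = x
        return max(0.0, x1 + x2 - 4.0)  # positive value means the constraint is violated

    def penalized_objective(x, penalty=1e6):
        # One common way for stochastic solvers to respect constraints:
        # add a large penalty proportional to the amount of violation.
        return objective(x) + penalty * constraint_violation(x)
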
Metaheuristic algorithms are one of the most effective classes of stochastic approaches for solving optimization problems; they obtain suitable solutions through random search of the problem-solving space, random operators, and trial-and-error processes. The optimization process in a metaheuristic algorithm begins by randomly initializing a set of candidate solutions, called the algorithm population, in the problem-solving space. These candidate solutions are then improved by the population-update steps of the algorithm over successive iterations. After the algorithm has run to completion, the best candidate solution found over all iterations is returned as the solution to the problem [7]. Because of this random search process, metaheuristic algorithms provide no guarantee of reaching the global optimum; nevertheless, the solutions they obtain are acceptable as quasi-optimal solutions because they lie close to the global optimum. The pursuit of more effective solutions, closer to the global optimum, has motivated researchers to design numerous metaheuristic algorithms [8].
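The general procedure described above can be summarized in a short, hedged Python sketch. The random-perturbation update used here is only a placeholder for whatever population-update rule a particular metaheuristic defines; the population size and iteration count are arbitrary illustrative values.

    import random

    def generic_metaheuristic(fitness, dim, lb, ub, pop_size=30, iters=200):
        # 1) Randomly initialize the population of candidate solutions.
        pop = [[random.uniform(lb, ub) for _ in range(dim)] for _ in range(pop_size)]
        best = min(pop, key=fitness)

        # 2) Improve the candidate solutions over successive iterations.
        for _ in range(iters):
            new_pop = []
            for x in pop:
                # Placeholder update: a small random move, clipped to the bounds
                # (a real algorithm substitutes its own update operators here).
                cand = [min(ub, max(lb, xi + random.gauss(0.0, 0.1 * (ub - lb))))
                        for xi in x]
                # Greedy replacement: keep the better of the old and new solution.
                new_pop.append(cand if fitness(cand) < fitness(x) else x)
            pop = new_pop
            best = min(pop + [best], key=fitness)

        # 3) Return the best candidate found during all iterations.
        return best

For example, calling generic_metaheuristic(penalized_objective, dim=2, lb=-10.0, ub=10.0) on the penalized objective sketched earlier returns a quasi-optimal point in the sense described above.
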
To conduct an effective search and reach a suitable solution for an optimization problem, a metaheuristic algorithm must be able to search the problem-solving space well at both the global and the local level. The goal of global search, captured by the concept of exploration, is to scan the problem-solving space comprehensively in order to avoid getting stuck in local optima and to discover the region containing the global optimum. The goal of local search, captured by the concept of exploitation, is to scan promising areas of the problem-solving space accurately and with small steps in order to reach better solutions closer to the global optimum. In addition to strong exploration and exploitation abilities, balancing the two throughout the iterations of the search is the key to a metaheuristic algorithm's success [9].
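There are many ways to realize this balance; one simple, hedged illustration is to blend a global random move with a small local move around the best-known solution, using a weight that shrinks linearly over the iterations, so that early iterations emphasize exploration and later ones emphasize exploitation. The linear schedule and blend below are illustrative assumptions, not the mechanism of any specific cited algorithm.

    import random

    def balanced_move(x, best, lb, ub, t, max_iters):
        # Weight close to 1 early in the run (exploration),
        # close to 0 late in the run (exploitation).
        w = 1.0 - t / max_iters

        # Exploration: a random point anywhere within the bounds.
        explore = [random.uniform(lb, ub) for _ in x]
        # Exploitation: a small perturbation around the best-known solution.
        exploit = [bi + random.gauss(0.0, 0.01 * (ub - lb)) for bi in best]

        # Blend the two moves and clip the result back into the bounds.
        return [min(ub, max(lb, w * e + (1.0 - w) * p))
                for e, p in zip(explore, exploit)]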

2. Pufferfish Optimization Algorithm

Metaheuristic algorithms have been developed by taking inspiration from various natural phenomena, the lifestyles of living organisms, concepts from biology, genetics, and physics, the rules of games, human interactions, and other phenomena. Based on the source of inspiration used in their design, metaheuristic algorithms are placed in five groups: swarm-based, evolutionary-based, physics-based, human-based, and game-based approaches. Swarm-based metaheuristic algorithms are developed with inspiration from the natural behaviors and strategies of animals, insects, birds, reptiles, aquatic creatures, and other living creatures in the wild. Particle Swarm Optimization (PSO) [10][11], Ant Colony Optimization (ACO) [11][12], Artificial Bee Colony (ABC) [12][13], and the Firefly Algorithm (FA) [13][14] are among the most well-known swarm-based metaheuristic algorithms. PSO is designed based on modeling the movement of flocks of birds and schools of fish searching for food. ACO is proposed based on modeling the ability of ants to find the shortest path between a food source and the colony. ABC is introduced based on modeling the hierarchical activities of honeybees as they attempt to reach new food sources. FA is designed with inspiration from the optical communication between fireflies. Pelican Optimization (PO) is another swarm-based metaheuristic algorithm, which is inspired by the hunting strategy of pelicans [14][15]. Among the natural behaviors of living organisms in the wild, hunting, foraging, chasing, digging, and migration are especially prominent and have inspired the design of swarm-based metaheuristic algorithms such as the Snake Optimizer (SO) [15][16], Sea Lion Optimization (SLnO) [16][17], Flying Foxes Optimization (FFO) [17][18], Mayfly Algorithm (MA) [18][19], White Shark Optimizer (WSO) [19][20], African Vultures Optimization Algorithm (AVOA) [20][21], Grey Wolf Optimizer (GWO) [21][22], Reptile Search Algorithm (RSA) [22][23], Whale Optimization Algorithm (WOA) [23][24], Golden Jackal Optimization (GJO) [24][25], Honey Badger Algorithm (HBA) [25][26], Marine Predator Algorithm (MPA) [26][27], Orca Predation Algorithm (OPA) [27][28], and Tunicate Swarm Algorithm (TSA) [28][29].
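To make the swarm-based idea concrete, the sketch below implements the classic PSO velocity and position update with an inertia weight, in which each particle is pulled toward its own best position and the swarm's best position. The formulation is the standard one from the PSO literature, and the parameter values are typical defaults rather than values taken from the cited papers.

    import random

    def pso(fitness, dim, lb, ub, n=30, iters=200, w=0.7, c1=1.5, c2=1.5):
        x = [[random.uniform(lb, ub) for _ in range(dim)] for _ in range(n)]
        v = [[0.0] * dim for _ in range(n)]
        pbest = [xi[:] for xi in x]            # best position seen by each particle
        gbest = min(pbest, key=fitness)        # best position seen by the swarm

        for _ in range(iters):
            for i in range(n):
                for d in range(dim):
                    r1, r2 = random.random(), random.random()
                    # Velocity: inertia plus attraction toward the personal
                    # and global best positions.
                    v[i][d] = (w * v[i][d]
                               + c1 * r1 * (pbest[i][d] - x[i][d])
                               + c2 * r2 * (gbest[d] - x[i][d]))
                    x[i][d] = min(ub, max(lb, x[i][d] + v[i][d]))
                if fitness(x[i]) < fitness(pbest[i]):   # minimization
                    pbest[i] = x[i][:]
            gbest = min(pbest + [gbest], key=fitness)
        return gbest
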
Evolutionary-based metaheuristic algorithms are developed with inspiration from the concepts of biology and genetics, natural selection, survival of the fittest, and Darwin's theory of evolution. The Genetic Algorithm (GA) [29][30] and Differential Evolution (DE) [30][31] are the most well-known algorithms of this group; their design is inspired by the reproduction process and genetic concepts and relies on random mutation, selection, and crossover operators. The Artificial Immune System (AIS) is introduced based on simulating the mechanism by which the body's defense system fights diseases and microbes [31][32]. Some other evolutionary-based metaheuristic algorithms are the Cultural Algorithm (CA) [32][33], Genetic Programming (GP) [33][34], and Evolution Strategy (ES) [34][35].
Physics-based metaheuristic algorithms are developed with inspiration from laws, transformations, processes, phenomena, forces, and other concepts in physics. Simulated Annealing (SA) is one of the most well-known physics-based metaheuristic algorithms and is developed based on modeling the annealing of metals: with the aim of obtaining an ideal crystal, a metal is first melted under heat and then cooled slowly [35][36]. Physical forces and Newton's laws of motion have been fundamental inspirations in designing algorithms such as the Gravitational Search Algorithm (GSA), based on the force of gravitational attraction [36][37], the Momentum Search Algorithm (MSA) [37][38], based on momentum, and the Spring Search Algorithm (SSA) [38][39], based on the elastic force of a spring. The Water Cycle Algorithm (WCA) is proposed based on modeling the physical transformations in the natural water cycle [39][40]. Some other physics-based metaheuristic algorithms are Fick's Law Algorithm (FLA) [40][41], Prism Refraction Search (PRS) [41][42], Henry Gas Optimization (HGO) [42][43], Black Hole Algorithm (BHA) [43][44], Nuclear Reaction Optimization (NRO) [44][45], Equilibrium Optimizer (EO) [45][46], Multi-Verse Optimizer (MVO) [46][47], Lichtenberg Algorithm (LA) [47][48], Archimedes Optimization Algorithm (AOA) [48][49], Thermal Exchange Optimization (TEO) [49][50], and Electro-Magnetism Optimization (EMO) [50][51].
Human-based metaheuristic algorithms are developed with inspiration from the thoughts, choices, decisions, interactions, communications, and other activities of humans in society and personal life. Teaching–Learning-Based Optimization (TLBO) is one of the most widely used human-based metaheuristic algorithms; its design is inspired by the educational communication and knowledge exchange between teachers and students, as well as among the students themselves [51][52]. The Mother Optimization Algorithm (MOA) is introduced with inspiration from Eshrat's care of her children [6]. The Election-Based Optimization Algorithm (EBOA) is proposed based on modeling the process of voting and holding elections in society [8]. The Chef-Based Optimization Algorithm (CHBO) is designed based on simulating how chefs teach cooking skills to applicants in culinary schools [52][53]. The Teamwork Optimization Algorithm (TOA) is developed with inspiration from the collaboration of team members working together to achieve specified team goals [53][54]. Some other human-based metaheuristic algorithms are Driving Training-Based Optimization (DTBO) [5], War Strategy Optimization (WSO) [54][55], Ali Baba and the Forty Thieves (AFT) [55][56], the Gaining Sharing Knowledge-based Algorithm (GSK) [56][57], and the Coronavirus Herd Immunity Optimizer (CHIO) [57][58].
Game-based metaheuristic algorithms are developed by taking inspiration from the rules of games as well as the behavior of players, coaches, referees, and other influential people in individual and team games. The Darts Game Optimizer (DGO) is one of the most well-known algorithms of this group and is proposed based on modeling the competition among players to throw darts and collect the most points in order to win the game [58][59]. The Golf Optimization Algorithm (GOA) is introduced based on simulating players hitting the ball in order to place it in the holes [59][60]. The Puzzle Algorithm (PA) is designed based on modeling the strategy of players putting puzzle pieces together in order to complete the puzzle according to the pattern [60][61]. Some other game-based metaheuristic algorithms are Volleyball Premier League (VPL) [61][62], Running City Game Optimizer (RCGO) [62][63], and Tug of War Optimization (TWO) [63][64].
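Returning to the physics-based family, the melt-then-slowly-cool idea behind Simulated Annealing translates into a short acceptance-and-cooling loop: worse moves are accepted with a probability that decreases as the temperature drops, which lets the search explore early and settle later. This is a generic, hedged sketch of the standard SA scheme with an assumed geometric cooling schedule, not code from the cited work.

    import math
    import random

    def simulated_annealing(fitness, x0, lb, ub, t0=1.0, cooling=0.995, iters=5000):
        x, fx = list(x0), fitness(x0)
        best, fbest = list(x), fx
        t = t0                                   # high initial ("melted") temperature

        for _ in range(iters):
            # Propose a small random move from the current solution.
            cand = [min(ub, max(lb, xi + random.gauss(0.0, 0.1 * (ub - lb))))
                    for xi in x]
            fc = fitness(cand)
            # Metropolis criterion: always accept improvements, and accept
            # worse moves with a probability that shrinks as the system cools.
            if fc < fx or random.random() < math.exp((fx - fc) / max(t, 1e-12)):
                x, fx = cand, fc
                if fx < fbest:
                    best, fbest = list(x), fx
            t *= cooling                         # slow (geometric) cooling
        return best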