Ant Colony Optimization: Comparison
Please note this is a comparison between Version 2 by Rita Xu and Version 1 by Nikola Ivković.

Ant colony optimization (ACO) is a well-known class of swarm intelligence algorithms suitable for solving many NP-hard problems. An important component of such algorithms is a record of pheromone trails that reflect colonies’ experiences with previously constructed solutions of the problem instance that is being solved. By using pheromones, the algorithm builds a probabilistic model that is exploited for constructing new and, hopefully, better solutions.

  • ant colony optimization
  • pheromone update strategies
  • adjustment parameters

1. Introduction

Ant colony optimization (ACO) is a nature-inspired metaheuristic that has been used as a design template for many successful algorithms. These algorithms are mostly used to solve NP-hard combinatorial optimization problems, although ACO has also been used for continuous and mixed-variable optimization problems [1][2][3][4][5][6]. ACO algorithms are stochastic and cannot guarantee optimal solutions, but they are commonly applied to NP-hard optimization problems because, for such problems, exact algorithms generally cannot yield optimal solutions in a reasonable time. It is sometimes also possible to combine ACO techniques like pheromone trails with Monte Carlo methods to maintain some advantages from both procedures [7].
The key concept in ACO is the use of pheromone trails associated with solution components to allow the colony of artificial ants to accumulate collective knowledge and experience about the problem instance that they are trying to solve. These pheromone trails bias future attempts of artificial ants to construct better solutions. Being a metaheuristic, ACO only gives general recipes on how to devise an algorithm. For solving a specific type of problem, it is necessary to devise a specific ACO algorithm, and success depends not only on the nature of the problem but also greatly on how these recipes are applied. It is important to enable good guidance in the search space by properly balancing between exploration and exploitation. A too-greedy algorithm might result in fast convergence toward moderate or bad solutions, but too little greediness can result in slow convergence or unguided roaming through the search space with a very low probability of obtaining good solutions. For some NP-hard problems, ACO can get substantial help from heuristic information, but for other NP-hard problems such useful heuristic information is unknown or may be impossible to obtain. Heuristic information is very useful for problems where some kind of optimal route is the objective, such as the traveling salesman problem (TSP) [8], asymmetric traveling salesman problem (ATSP) [8], sequential ordering problem (SOP) [9][10], various vehicle routing problems (VRPs) [11][12], car renter salesman problem (CaRS) [13], etc. In these cases, the searching process can be faster because of heuristic information, and thus, the parameters and strategies used in ACO should be adjusted accordingly.
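The way pheromone trails and heuristic information jointly bias solution construction can be sketched as follows. This is a minimal illustration of the standard random proportional rule, where each candidate component is chosen with probability proportional to τ^α · η^β; the function name, the dictionary-based trail representation, and the α/β defaults are illustrative assumptions, not code from the work discussed here:

```python
import random

def random_proportional_choice(candidates, tau, eta, alpha=1.0, beta=2.0):
    """Pick the next solution component with probability proportional to
    tau[c]**alpha * eta[c]**beta (pheromone trail times heuristic value)."""
    weights = [(tau[c] ** alpha) * (eta[c] ** beta) for c in candidates]
    total = sum(weights)
    r = random.random() * total          # roulette-wheel selection
    acc = 0.0
    for c, w in zip(candidates, weights):
        acc += w
        if r <= acc:
            return c
    return candidates[-1]                # numerical-safety fallback
```

Raising β makes the choice greedier with respect to heuristic information, while raising α strengthens the influence of accumulated pheromone, which is one concrete way the exploration/exploitation balance mentioned above is tuned.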
The standard methods for reinforcing pheromone trails in modern ACO variants have diametrically opposite strategies, leaving a large gap between too much and too little greediness. Researchers conducted experimental research on generalized reinforcement strategies to gain better control over algorithmic behavior and, thus, achieve better performance in ant colony optimization algorithms. The research questions that researchers tried to answer were the following. Can adjustable pheromone reinforcement strategies improve the algorithmic performance of ACO algorithms? How can κ-best and max-κ-best be generalized or extended to provide less greedy algorithmic behavior than with κ = 1, in the case when this is desirable? How do different adjustable pheromone strategies influence the behavior of ACO algorithms (with and without local optimization) for combinatorial problems that have efficient heuristic information? For the experiments performed to answer these research questions, researchers chose well-known instances of TSP and ATSP. Initial ideas for such strategies were presented at the conference [14], and here, researchers extend their proposal and carry out a comprehensive experimental evaluation. The results show that, by using adjustable reinforcement strategies, an ACO algorithm can obtain better solutions and allow greater control over algorithmic behavior.

2. Ant Colony Optimization

Improvements made by ant colony optimization variants are achieved by changing either the solution construction procedure or the pheromone update procedure. In the case of the solution construction procedure, almost all ACO algorithms use the random proportional rule introduced by the ant system [15][16] or the more general variant, the pseudo-random proportional rule, used by the ant colony system [17]. Many improvements in ant colony optimization variants are based on changing the way the pheromone trails are modified in the pheromone update procedure. These changes are more often concerned with the way the pheromone trails are reinforced than with the way the pheromone trails are evaporated. The initial variant of ACO, named the ant system [18], uses all solutions from the current iteration in the pheromone reinforcement procedure. The components of better solutions are reinforced with proportionally more pheromone value than those of solutions with worse quality. This type of strategy provides little guidance to the subsequent solution construction procedure in searching through the solution space. The pheromone reinforcement strategy introduced by the ant system causes too much exploration and too little exploitation of previous knowledge. In an attempt to improve algorithmic behavior, an ACO variant named the elitist ant system was proposed [16]. To make the algorithm greedier, in addition to pheromone reinforcement by all solutions in the current iteration, the components of the best solution found from the beginning of the algorithm (also called the global-best solution) are reinforced with a weight determined by the parameter e. More selective in choosing solutions for pheromone reinforcement is the rank-based ant system, which uses weights for pheromone reinforcement based on solution ranks and reinforces only w solutions from the current iteration [19], where w is an integer parameter.
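The ant system update described above (evaporation followed by a deposit from every ant, weighted by solution quality) can be sketched as follows. The function name, the dictionary-based trail representation, and the parameters rho and Q are illustrative assumptions for a minimization problem such as the TSP, not the authors' implementation:

```python
def ant_system_update(tau, solutions, rho=0.5, Q=1.0):
    """Ant system pheromone update for a minimization problem:
    evaporate every trail by factor (1 - rho), then let each ant
    deposit Q / cost on the components of its solution, so that
    better (cheaper) solutions deposit proportionally more pheromone."""
    for key in tau:
        tau[key] *= (1.0 - rho)           # evaporation on all trails
    for components, cost in solutions:
        deposit = Q / cost                # smaller cost -> larger deposit
        for c in components:
            tau[c] = tau.get(c, 0.0) + deposit
    return tau
```

The elitist ant system would additionally call the deposit loop once more for the global-best solution with an extra weight e, which is the greediness knob mentioned above.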
Modern variants of the ACO algorithms use only one solution, in some sense “the best” solution, to reinforce pheromone trails. The ant colony system (ACS) uses the global-best solution for pheromone reinforcement [17]. The MAX-MIN ant system (MMAS) uses the iteration-best or global-best solution [8]. The most commonly used variant of ACO is probably the MMAS because of its excellent performance on many combinatorial optimization problems. When a faster and greedier variant is preferred, then ACS might be used instead of MMAS. The best–worst ant system (BWAS) uses only the global-best solution to reinforce pheromone trails but also decreases the pheromone trails associated with the components of the worst solution in the current iteration that are not contained in the global-best solution [20][21]. Although the authors of BWAS claimed good performance, the BWAS algorithm did not gain wide popularity. Population-based ant colony optimization uses the iteration-best solution to reinforce pheromone trails but also has a special mechanism to completely unmake the pheromone reinforcement made in the previous iteration, which is especially suitable for dynamic optimization problems [22][23]. There are also some studies about reinforcing pheromone trails that cannot be applied in general cases and address only specific situations. In [24], Deng et al. proposed a technique for situations when pheromone trails are associated with nodes instead of arcs of the construction graph. The best-so-far strategy is used together with new rules (these new rules are named the r-best-node update rule and the relevant-node depositing rule; the first one has a somewhat similar name to the κ-best strategy, although the actual methods are very different) proposed by the authors specifically for this type of problem.
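The single-solution reinforcement used by the modern variants above can be sketched for MMAS, whose defining feature is clamping all trails to the interval [τ_min, τ_max] after the update. The function name, parameter defaults, and dictionary-based trail representation are illustrative assumptions:

```python
def mmas_update(tau, best_components, best_cost, rho=0.02,
                tau_min=0.01, tau_max=5.0):
    """MAX-MIN ant system pheromone update: evaporate all trails,
    reinforce only the chosen best solution (iteration-best or
    global-best), then clamp every trail into [tau_min, tau_max]
    to prevent premature stagnation."""
    for key in tau:
        tau[key] *= (1.0 - rho)                  # evaporation
    for c in best_components:
        tau[c] = tau.get(c, 0.0) + 1.0 / best_cost
    for key in tau:                              # enforce trail bounds
        tau[key] = min(tau_max, max(tau_min, tau[key]))
    return tau
```

Alternating between the iteration-best and global-best solution as `best_components` is the standard way MMAS trades exploration against greediness.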
Their approach is applied to the shortest path problem, even though the path is not uniquely defined by the set of nodes and, therefore, pheromone trails associated with arcs would seem a more appropriate choice. Associating pheromone trails with nodes in the shortest path problem might have some success only if a path has a small subset of all possible nodes. Pheromone modification strategies have been proposed for dynamic optimization problems [25][26]. These strategies are not related to pheromone reinforcement; instead, they are about reacting to a change in the problem to avoid restarting the algorithm. The pheromone trails are modified to recognize changes in the problem instance, which is applicable in cases where changes are small or medium. Rather complex rules for giving weights to additional pheromone values used in the pheromone reinforcement procedure of ACS are proposed in [27]. The authors performed an experimental evaluation of the proposed rules on three small instances of Euclidean TSP and claim better results than those obtained by AS and standard ACS. A pheromone update strategy based on the theory of learning automata, where additional pheromone values for the reinforcement procedure depend not only on solution quality but also on current pheromone trails, is used in [28]. It is important to note that ACO is a general metaheuristic that has different variants, and among them, the important variants are the ant colony system (ACS) [9][17], MAX-MIN ant system (MMAS) [8], and three bound ant system (TBAS) [29][30]. As ACO algorithms are stochastic, it is important to measure their performance using an appropriate methodology. Although older sources recommended the arithmetic mean as a measure of performance, newer research showed that percentiles and the median are much more suitable measures of algorithmic performance for ACO [31][32].
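The recommendation above, summarizing repeated stochastic runs with the median and percentiles rather than the arithmetic mean, can be sketched as follows; the function name and the nearest-rank percentile definition are illustrative assumptions:

```python
import math
import statistics

def summarize_runs(costs, p=75):
    """Summarize the costs of repeated runs of a stochastic algorithm
    with the median and the p-th percentile (nearest-rank definition),
    which are robust to the heavy-tailed result distributions that make
    the arithmetic mean misleading."""
    ordered = sorted(costs)
    idx = math.ceil(p / 100 * len(ordered)) - 1   # nearest-rank index
    return statistics.median(ordered), ordered[idx]
```

For example, a single very bad run inflates the mean of `[10, 12, 11, 100, 13]` to over 29, while the median (12) and 75th percentile (13) still describe typical behavior.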

References

  1. Leguizamón, G.; Coello, C.A.C. An alternative ACOR algorithm for continuous optimization problems. In Proceedings of the 7th International Conference on Swarm Intelligence, ANTS’10, Brussels, Belgium, 8–10 September 2010; Springer: Berlin/Heidelberg, Germany, 2010; pp. 48–59.
  2. Liao, T.; Socha, K.; Montes de Oca, M.A.; Stützle, T.; Dorigo, M. Ant Colony Optimization for Mixed-Variable Optimization Problems. IEEE Trans. Evol. Comput. 2014, 18, 503–518.
  3. Liu, J.; Anavatti, S.; Garratt, M.; Abbass, H.A. Modified continuous Ant Colony Optimisation for multiple Unmanned Ground Vehicle path planning. Expert Syst. Appl. 2022, 196, 116605.
  4. Liao, T.W.; Kuo, R.; Hu, J. Hybrid ant colony optimization algorithms for mixed discrete–continuous optimization problems. Appl. Math. Comput. 2012, 219, 3241–3252.
  5. Omran, M.G.; Al-Sharhan, S. Improved continuous Ant Colony Optimization algorithms for real-world engineering optimization problems. Eng. Appl. Artif. Intell. 2019, 85, 818–829.
  6. Chen, Z.; Zhou, S.; Luo, J. A robust ant colony optimization for continuous functions. Expert Syst. Appl. 2017, 81, 309–320.
  7. Kudelić, R.; Ivković, N. Ant inspired Monte Carlo algorithm for minimum feedback arc set. Expert Syst. Appl. 2019, 122, 108–117.
  8. Stützle, T.; Hoos, H.H. MAX-MIN Ant System. Future Gener. Comput. Syst. 2000, 16, 889–914.
  9. Gambardella, L.M.; Montemanni, R.; Weyland, D. An Enhanced Ant Colony System for the Sequential Ordering Problem. In Operations Research Proceedings 2011, Proceedings of the International Conference on Operations Research (OR 2011), Zurich, Switzerland, 30 August–2 September 2011; Klatte, D., Lüthi, H.J., Schmedders, K., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; pp. 355–360.
  10. Skinderowicz, R. An improved Ant Colony System for the Sequential Ordering Problem. Comput. Oper. Res. 2017, 86, 1–17.
  11. Ky Phuc, P.N.; Phuong Thao, N.L. Ant Colony Optimization for Multiple Pickup and Multiple Delivery Vehicle Routing Problem with Time Window and Heterogeneous Fleets. Logistics 2021, 5, 28.
  12. Jia, Y.H.; Mei, Y.; Zhang, M. A Bilevel Ant Colony Optimization Algorithm for Capacitated Electric Vehicle Routing Problem. IEEE Trans. Cybern. 2022, 52, 10855–10868.
  13. Popović, E.; Ivković, N.; Črepinšek, M. ACOCaRS: Ant Colony Optimization Algorithm for Traveling Car Renter Problem. In Bioinspired Optimization Methods and Their Applications; Mernik, M., Eftimov, T., Črepinšek, M., Eds.; Springer International Publishing: Cham, Switzerland, 2022; pp. 31–45.
  14. Ivkovic, N.; Malekovic, M.; Golub, M. Extended Trail Reinforcement Strategies for Ant Colony Optimization. In Swarm, Evolutionary, and Memetic Computing, Proceedings of the Second International Conference, SEMCCO 2011, Visakhapatnam, India, 19–21 December 2011; Lecture Notes in Computer Science; Panigrahi, B.K., Suganthan, P.N., Das, S., Satapathy, S.C., Eds.; Springer: Berlin/Heidelberg, Germany, 2011; Volume 7076, pp. 662–669.
  15. Dorigo, M.; Maniezzo, V.; Colorni, A. Positive Feedback as a Search Strategy; Technical Report 91-016; Dipartimento di Elettronica, Politecnico di Milano: Milano, Italy, 1991.
  16. Dorigo, M.; Maniezzo, V.; Colorni, A. Ant system: Optimization by a colony of cooperating agents. IEEE Trans. Syst. Man Cybern. Part B 1996, 26, 29–41.
  17. Dorigo, M.; Gambardella, L.M. Ant colony system: A cooperative learning approach to the traveling salesman problem. IEEE Trans. Evol. Comput. 1997, 1, 53–66.
  18. Dorigo, M. Optimization, Learning and Natural Algorithms. Ph.D. Thesis, Politecnico di Milano, Milano, Italy, 1992. (In Italian).
  19. Bullnheimer, B.; Hartl, R.F.; Strauss, C. A New Rank Based Version of the Ant System: A Computational Study. Cent. Eur. J. Oper. Res. Econ. 1999, 7, 25–38.
  20. Cordón, O.; de Viana, I.F.; Herrera, F. Analysis of the Best-Worst Ant System and Its Variants on the QAP. In Ant Algorithms, Proceedings of the Third International Workshop, ANTS 2002, Brussels, Belgium, 12–14 September 2002; Springer: Berlin/Heidelberg, Germany, 2002; pp. 228–234.
  21. Cordón, O.; de Viana, I.F.; Herrera, F. Analysis of the Best-Worst Ant System and its Variants on the TSP. Mathw. Soft Comput. 2002, 9, 177–192.
  22. Guntsch, M.; Middendorf, M. Applying Population Based ACO to Dynamic Optimization Problems. In Ant Algorithms, Proceedings of the Third International Workshop, ANTS 2002, Brussels, Belgium, 12–14 September 2002; Springer: Berlin/Heidelberg, Germany, 2002; pp. 111–122.
  23. Guntsch, M.; Middendorf, M. A Population Based Approach for ACO. In Applications of Evolutionary Computing, Proceedings of the EvoWorkshops 2002: EvoCOP, EvoIASP, EvoSTIM/EvoPLAN, Kinsale, Ireland, 3–4 April 2002; Lecture Notes in Computer Science; Cagnoni, S., Gottlieb, J., Hart, E., Middendorf, M., Raidl, G.R., Eds.; Springer: Berlin/Heidelberg, Germany, 2002; Volume 2279, pp. 72–81.
  24. Deng, X.; Zhang, L.; Lin, H.; Luo, L. Pheromone mark ant colony optimization with a hybrid node-based pheromone update strategy. Neurocomputing 2015, 148, 46–53.
  25. Guntsch, M.; Middendorf, M. Pheromone Modification Strategies for Ant Algorithms Applied to Dynamic TSP. In Applications of Evolutionary Computing, Proceedings of the EvoWorkshops 2001: EvoCOP, EvoFlight, EvoIASP, EvoLearn, and EvoSTIM, Como, Italy, 18–20 April 2001; Lecture Notes in Computer Science; Boers, E.J.W., Gottlieb, J., Lanzi, P.L., Smith, R.E., Cagnoni, S., Hart, E., Raidl, G.R., Tijink, H., Eds.; Springer: Berlin/Heidelberg, Germany, 2001; Volume 2037, pp. 213–222.
  26. Wang, L.; Shen, J.; Luo, J. Impacts of Pheromone Modification Strategies in Ant Colony for Data-Intensive Service Provision. In Proceedings of the 2014 IEEE International Conference on Web Services, ICWS, Anchorage, AK, USA, 27 June–2 July 2014; IEEE Computer Society: Washington, DC, USA, 2014; pp. 177–184.
  27. Liu, G.; He, D. An improved Ant Colony Algorithm based on dynamic weight of pheromone updating. In Proceedings of the Ninth International Conference on Natural Computation, ICNC 2013, Shenyang, China, 23–25 July 2013; Wang, H., Yuen, S.Y., Wang, L., Shao, L., Wang, X., Eds.; IEEE: Piscataway, NJ, USA, 2013; pp. 496–500.
  28. Lalbakhsh, P.; Zaeri, B.; Lalbakhsh, A. An Improved Model of Ant Colony Optimization Using a Novel Pheromone Update Strategy. IEICE Trans. Inf. Syst. 2013, E96.D, 2309–2318.
  29. Ivković, N.; Golub, M. A New Ant Colony Optimization Algorithm: Three Bound Ant System. In Swarm Intelligence, Proceedings of the 9th International Conference, ANTS 2014, Brussels, Belgium, 10–12 September 2014; Lecture Notes in Computer Science; Dorigo, M., Birattari, M., Garnier, S., Hamann, H., de Oca, M.A.M., Solnon, C., Stützle, T., Eds.; Springer: Berlin/Heidelberg, Germany, 2014; Volume 8667, pp. 280–281.
  30. Ivković, N. Ant Colony Algorithms for the Travelling Salesman Problem and the Quadratic Assignment Problem. In Swarm Intelligence—Volume 1: Principles, Current Algorithms and Methods; The Institution of Engineering and Technology: London, UK, 2018; pp. 409–442.
  31. Ivkovic, N.; Jakobovic, D.; Golub, M. Measuring Performance of Optimization Algorithms in Evolutionary Computation. Int. J. Mach. Learn. Comput. 2016, 6, 167–171.
  32. Ivković, N.; Kudelić, R.; Črepinšek, M. Probability and Certainty in the Performance of Evolutionary and Swarm Optimization Algorithms. Mathematics 2022, 10, 4364.