Facility Location Problem: History

The facility location problem (FLP) is a complex optimization problem that has been widely researched and applied in industry.

  • optimization
  • facility location problems
  • genetic algorithms

1. Introduction

Given the ubiquity of computationally hard problems, human insight and intuition can help researchers improve traditional algorithmic methods [1][2]. Researchers applied this idea to the optimization problem known as the facility location problem (FLP), using video games as the means to gather human input on instances of it.
The FLP is stated as follows: given a set of demand centers and a set of potential locations for opening facilities, choose a subset of the potential locations. Opening each facility incurs a fixed cost, and servicing demand centers from faraway facilities is costly as well. Strategies range from opening few facilities that serve distant demand centers to opening many facilities that serve demand centers over short distances. The subset must be chosen to minimize the total cost. This problem has been proven to be NP-hard [3].
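As a concrete illustration, not part of the original entry, the uncapacitated variant of the problem can be sketched in a few lines: a solution is a subset of facilities, and its cost combines the fixed opening costs with each demand center's cheapest service cost. The cost-matrix representation and function names below are assumptions made for the example; the exhaustive solver is viable only for tiny instances, which is precisely why heuristics are needed in practice.

```python
from itertools import combinations

def flp_cost(open_set, opening_cost, service_cost):
    """Total cost of opening the facilities in open_set: fixed opening
    costs plus, for each demand center d, the cheapest service cost
    service_cost[d][f] over the open facilities f."""
    fixed = sum(opening_cost[f] for f in open_set)
    service = sum(min(row[f] for f in open_set) for row in service_cost)
    return fixed + service

def brute_force_flp(opening_cost, service_cost):
    """Exact solver by exhaustive search over all nonempty facility
    subsets; the number of subsets grows exponentially with the number
    of candidate locations."""
    n = len(opening_cost)
    best, best_cost = None, float("inf")
    for k in range(1, n + 1):
        for subset in combinations(range(n), k):
            c = flp_cost(subset, opening_cost, service_cost)
            if c < best_cost:
                best, best_cost = subset, c
    return best, best_cost
```

For two candidate facilities with opening costs [3, 2] and a service-cost matrix [[1, 4], [5, 1]], opening only the second facility is optimal, at a total cost of 7.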
Crowd computing techniques are used to gather large amounts of human input on problem instances. Crowd computing is a strategy that allows a collective of individuals, rather than computers alone, to perform productive computations whose results are aggregated to solve a problem. Lenart-Gansiniec et al. [4] provide a comprehensive discussion of crowdsourcing in science, a topic relevant to generating scientific knowledge that has been used to solve problems in the business, public, and non-governmental sectors.
The FLP arises frequently in logistics and operations research. Despite its ubiquity, the complexity of the problem often makes it impractical to obtain optimal solutions for large instances within reasonable time limits. For this reason, such problems are often tackled with heuristics [5], meta-heuristics, or stochastic approaches [6]. In particular, problems requiring a robust solution, one that can endure a disruption and remain efficient, have been approached with genetic algorithms [7][8]. The aim of this study is to demonstrate the feasibility of incorporating human intuition and insight into solving complex problems such as the FLP, with the goals of reducing transportation costs, enhancing robustness, and allocating resources effectively. Acquiring knowledge from human participants is challenging, however, because the thought processes that lead to a conclusion are difficult to articulate: people may have an intuitive sense of the solution to a particular problem instance, yet be unable to express it as a concrete algorithm or heuristic. To address this challenge, a video game was employed to gather candidate solutions from human participants. The game presents participants with multiple instances of optimization problems, with the aim of aggregating their strategies into a unified algorithm or heuristic.
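To make the genetic-algorithm approach mentioned above concrete, the sketch below evolves bit strings (bit f set means facility f is open) for the uncapacitated FLP. This is a minimal illustration, not the method of refs. [7][8]; the population size, mutation rate, and truncation selection are arbitrary choices made for the example.

```python
import random

def ga_flp(opening_cost, service_cost, pop_size=30, generations=200,
           mut_rate=0.1, seed=0):
    """Minimal genetic algorithm for the uncapacitated FLP.
    A candidate solution is a bit string; bit f = 1 opens facility f."""
    rng = random.Random(seed)
    n = len(opening_cost)

    def cost(bits):
        open_set = [f for f in range(n) if bits[f]]
        if not open_set:                      # no facility open: infeasible
            return float("inf")
        fixed = sum(opening_cost[f] for f in open_set)
        service = sum(min(row[f] for f in open_set) for row in service_cost)
        return fixed + service

    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)                    # best solutions first
        survivors = pop[: pop_size // 2]      # truncation selection (elitist)
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n) if n > 1 else 0   # one-point crossover
            child = [bit ^ (rng.random() < mut_rate)    # bit-flip mutation
                     for bit in a[:cut] + b[cut:]]
            children.append(child)
        pop = survivors + children
    best = min(pop, key=cost)
    return best, cost(best)
```

Because the best half of the population survives each generation, a good solution, once found, is never lost; on the tiny instance from the previous example the algorithm reaches the optimal cost of 7.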

2. Human-Based Computation

Previous research has addressed the integration of human and computer interaction in solving various optimization problems. For instance, the capacitated vehicle routing problem with time windows has been explored through an interactive graphical interface [2]. Studies have also evaluated the effectiveness of combining the strengths of human and computer participants [9]. Results have shown that mixed teams of human and computer agents are more effective than all-human teams in military command and control tasks [10]. Interactive genetic algorithms have been applied to information retrieval problems [11] and to software design [12].
Compared to the interface-based approach, using a video game has several advantages. The interface-based approach requires training the user, who acts as the guide and operator of the algorithm; this requirement limits the pool of potential subjects. A video game, on the other hand, with its level-based structure, eliminates the need for training and reduces the cognitive load on the player, whose sole focus is progressing through the levels of the game without the added responsibility of directing the algorithm.

3. Crowd Computing

Crowd computing, also known as citizen science, is a novel approach that leverages the abilities of ordinary individuals to solve diverse problems. This concept is characterized by the gathering of vast amounts of data, their analysis, and the processing of information to identify effective solutions to these problems [13][14][15]. Crowd computing is similar to cloud computing, as it seeks to be an accessible, dynamic, and available computational resource. However, it differs from cloud computing in that it utilizes human cognitive abilities rather than central processing units to carry out its computations.
Researchers acknowledge that crowd computing and cloud computing are two different paradigms for solving optimization problems. Although both approaches utilize a large number of computers for computation, they have fundamental differences. Crowd computing involves leveraging human intelligence to solve complex problems that cannot be addressed by computers alone. This approach requires the recruitment of a large number of individuals who perform small tasks that are then combined to solve a larger problem. Crowd computing is typically used for tasks that require human interpretation and processing of data, such as image labeling, data entry, and transcription. In contrast, cloud computing involves the use of a network of computers to provide on-demand computing resources for data storage, data processing, and software development. The data and applications are stored on remote servers, which can be accessed over the Internet. This approach is suitable for tasks that require large-scale computation and data processing.
Participation by large numbers of people in solving complex problems is often incentivized through monetary rewards. However, negotiating appropriate compensation, promoting collaboration among participants, and effectively managing communication between workers and organizers can pose challenges [16]. For instance, mobile crowd-sensing applications have faced reluctance from participants due to perceived high energy and bandwidth consumption costs [17].
Aside from monetary compensation, entertainment can also motivate individuals to participate in crowd computing. Problem-solving tasks, particularly in the form of puzzle games, can be intellectually challenging and provide a sense of satisfaction upon completion. Studies have demonstrated that incorporating puzzle elements into computer-based tasks can engage young people more productively with technology [18].
It is proposed that framing complex computational problems as engaging puzzle games can result in developing a crowd-computing platform capable of massive data acquisition. The motivation for participation in such games is often entertainment, and several examples of popular crowdsourcing games exist, such as Trivia Crack, King of Thieves, and Super Mario Maker by Nintendo. These games often induce player engagement through social pressure to try the game, as evidenced by their commercial success.
It has been observed that various phenomena can be conceptualized as implementations of the principles of crowd computing, including the popularity of social news websites such as Reddit and Digg [19], the analysis of semantic meaning derived from crowdsourced tags assigned to books [20], and the influence that audience participation has on the creation, dissemination, curation, and financing of music [21]. Crowds can be utilized both for formulating features and as sources of data for machine science applications [22].
The quality of contributions by participants in crowd computing poses a challenge, particularly when untrained individuals generate data. To determine the most appropriate data to be taken into consideration, a ranking system based on weights can be implemented, where the data submitted by participants with the highest classification are assigned greater weights. Alternatively, a voting system can be established, where a group of scientists assigns scores to the data. This approach generates profiles of the skills of each scientist based on dimensions such as accuracy, completeness, reputation, relevance, and others [23]. The trustworthiness of a participant’s rating can be established by evaluating the objectivity of participants and the degree of consensus among them [24]. The integration of conflicting preferences of participants in solving problems can be facilitated through the use of social choice theory, enabling the selection of a single solution from a set of candidates [25].
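The weight-based ranking just described can be sketched as a weighted vote; the participant names and reliability weights below are purely illustrative assumptions, not values from any of the cited systems.

```python
def weighted_vote(submissions, weights):
    """Pick the crowd answer with the highest total weight.
    submissions: list of (participant, answer) pairs for one task;
    weights: reliability score per participant (unknown participants
    default to a weight of 1.0)."""
    totals = {}
    for participant, answer in submissions:
        w = weights.get(participant, 1.0)
        totals[answer] = totals.get(answer, 0.0) + w
    return max(totals, key=totals.get)
```

With this scheme a single highly rated participant can outweigh several low-rated ones, which is the intended effect of ranking contributors by skill.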
In the context of control systems with significant safety considerations, such as the operation of heavy machinery, physiological data can be utilized to estimate the operator’s mental workload, thereby providing insights into the operator’s mental state during the task of collaboration with a computer [26]. This information can be used to determine if a task is overly simple or excessively challenging for a particular participant.
Recent works have developed new ways of solving allocation problems with crowdsourcing and gamification techniques. For instance, Allahbakhsh et al. [27] presented an approach to p-median problems based on crowdsourcing and gamification. They developed a crowdsourced game called SolveIt, which employs the wisdom and intelligence of the crowd to solve location allocation problems. SolveIt increases players' motivation through attention, by showing the winner's name on a scoreboard, and through competition. The authors proposed a graph data model to represent location allocation problems and asked a group of 40 students at the University of Zabol to play the game and solve the proposed problems. The contributions received from the crowd were compared against a specific implementation of genetic algorithms, which was used as the gold standard.
Jiang et al. [28] focused on the problem of multiple cooperative task allocation (MCTA). Exploiting real-life relationships among users of a social network, they proposed group-oriented cooperative crowd-sensing, which solves the MCTA problem in three phases while achieving good task cooperation quality. In phase 1, a subset of users on the social network is selected as initial leaders, and sensing tasks are pushed directly to them. In phase 2, the leaders search among their socially connected users to form groups. Phase 3 carries out the group-oriented task allocation that solves the MCTA problem.
Moreover, crowdsourcing has been combined with quantum game theory. For instance, Xu et al. [29] proposed quantum crowdsourcing schemes in which the welfare of the requester or worker can be maximized, because quantum players share an extended strategy space and entanglement offers a new way of modeling fine-grained relationships between players. To address task allocation problems, they presented a quantum game model for quota-oriented crowdsourcing games.

4. Games for Solving Problems

Games with a purpose have been developed to utilize individuals’ problem-solving skills for addressing computationally challenging problems [30]. These games focus on various domains, such as biology, biochemistry, and linguistics. An example of such a game is BioGames [31], which is an image-matching game that trains its players to recognize malaria-infected red blood cells. This not only provides an engaging training program for student medical personnel but also serves as a means for crowdsourcing labeling data to train machine learning algorithms. The results of BioGames demonstrate the potential of a crowd of untrained individuals to achieve a disease diagnosis with an accuracy comparable to that of an expert.
The game Phylo [14] leverages the concept of color matching to facilitate the optimization of nucleotide sequence alignment, thus minimizing the number of mutations needed to produce a different species from an ancestor. This process enables the generation of phylogenetic trees, thereby providing deeper insights into the evolutionary relationships between species with sequenced DNA. The problem of multiple sequence alignment has been shown to be NP-complete [32]; however, Phylo serves as an example of how such problems can be transformed into engaging games.
Foldit [33] and EteRNA are two games designed to facilitate the discovery of protein folding and RNA folding mechanisms, respectively. After acquiring a basic understanding of the rules, non-expert players can predict the folding of complex molecules. The players of EteRNA vote on each other’s predictions, which are later synthesized and evaluated for accuracy. Foldit allows players to automate their common sequences of actions through a feature called recipes, and through observing these recipes, new strategies have been discovered that improve the performance of existing algorithms [13]. This demonstrates the potential of crowd computing to uncover new algorithms and heuristics for complex problems. Foldit further encodes and conveys the insights gained by players by providing them with a scripting language.
EyeWire [34][35] is a game aimed at mapping the outlines of neurons in 3D space from 2D images obtained from a retina. The player is tasked with tracing the path of a neuron in 3D space by following the edges of the 2D images. The game complements algorithmic approaches by directing the player’s focus to areas where the algorithm is uncertain, thereby facilitating the generation of complete maps of neurons and their connections from images. EyeWire represents a successful example of human–computer cooperation in solving a problem, utilizing the strengths of each in areas where the other is ineffective.
Google Image Labeler (GIL), which is based on the ESP game [36], employs human players to label images obtained from the web. The gameplay involves two players, unable to communicate with each other, attempting to propose matching labels for an image. When both players agree on the same label, it is used as metadata to enhance the company's image search service. With goals and gameplay similar to GIL's, TagATune [37][38] aims to obtain metadata for audio files, such as the genre, the instruments being played, and the gender of the vocalists. Both GIL and TagATune incorporate a multiplayer component to increase their entertainment value and attract more players, and this component also serves as a validation mechanism: because players are incentivized to agree on a label but can communicate only through the game's mechanics, poor solutions, whether submitted intentionally or not, are easily detected and disregarded.
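At its core, this output-agreement validation reduces to intersecting the two players' independently proposed label sets. The sketch below illustrates the idea; the taboo-list parameter is an assumption inspired by the ESP game's practice of banning overly common words, not a documented API.

```python
def agreed_labels(player1_labels, player2_labels, taboo=()):
    """Accept only labels proposed independently by both players and
    not on the taboo list; everything else is disregarded."""
    return (set(player1_labels) & set(player2_labels)) - set(taboo)
```

For example, if one player submits "dog, grass, sky" and the other "dog, park, sky" with "sky" on the taboo list, only "dog" is accepted as metadata.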

This entry is adapted from the peer-reviewed paper 10.3390/app13084884

References

  1. Takagi, H. Interactive evolutionary computation: Fusion of the capabilities of EC optimization and human evaluation. Proc. IEEE 2001, 89, 1275–1296.
  2. Anderson, D.; Anderson, E.; Lesh, N.; Marks, J.; Mirtich, B.; Ratajczak, D.; Ryall, K. Human-guided simple search. In Proceedings of the AAAI/IAAI, Austin, TX, USA, 30 July–3 August 2000; pp. 209–216.
  3. Megiddo, N.; Tamir, A. On the complexity of locating linear facilities in the plane. Oper. Res. Lett. 1982, 1, 194–197.
  4. Lenart-Gansiniec, R.; Czakon, W.; Sułkowski, Ł.; Pocek, J. Understanding crowdsourcing in science. Rev. Manag. Sci. 2022, 1–34.
  5. Graham, R.L. Bounds on multiprocessing timing anomalies. SIAM J. Appl. Math. 1969, 17, 416–429.
  6. Pinedo, M.; Schrage, L. Stochastic shop scheduling: A survey. In Deterministic and Stochastic Scheduling; Springer: Berlin/Heidelberg, Germany, 1982; pp. 181–196.
  7. Hernandez, I.; Ramirez-Marquez, J.E.; Rainwater, C.; Pohl, E.; Medal, H. Robust facility location: Hedging against failures. Reliab. Eng. Syst. Saf. 2014, 123, 73–80.
  8. Chen, C.H.; Chou, J.H. Multiobjective optimization of airline crew roster recovery problems under disruption conditions. IEEE Trans. Syst. Man Cybern. Syst. 2017, 47, 133–144.
  9. Scott, S.D.; Lesh, N.; Klau, G.W. Investigating human-computer optimization. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Minneapolis, MN, USA, 20–25 April 2002; pp. 155–162.
  10. Fan, X.; McNeese, M.; Sun, B.; Hanratty, T.; Allender, L.; Yen, J. Human–agent collaboration for time-stressed multicontext decision making. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 2010, 40, 306–320.
  11. Cho, S.B.; Lee, J.Y. A human-oriented image retrieval system using interactive genetic algorithm. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 2002, 32, 452–458.
  12. Simons, C.L.; Parmee, I.C. Elegant object-oriented software design via interactive, evolutionary computation. IEEE Trans. Syst. Man Cybern. Part C Appl. Rev. 2012, 42, 1797–1805.
  13. Khatib, F.; Cooper, S.; Tyka, M.D.; Xu, K.; Makedon, I.; Popović, Z.; Baker, D. Algorithm discovery by protein folding game players. Proc. Natl. Acad. Sci. USA 2011, 108, 18949–18953.
  14. Kawrykow, A.; Roumanis, G.; Kam, A.; Kwak, D.; Leung, C.; Wu, C.; Zarour, E.; Sarmenta, L.; Blanchette, M.; Waldispühl, J.; et al. Phylo: A citizen science approach for improving multiple sequence alignment. PLoS ONE 2012, 7, e31362.
  15. Muhammadi, J.; Rabiee, H.R. Crowd computing: A survey. arXiv 2013, arXiv:1301.2774.
  16. Wang, W.; Jiang, J.; An, B.; Jiang, Y.; Chen, B. Toward efficient team formation for crowdsourcing in noncooperative social networks. IEEE Trans. Cybern. 2017, 47, 4208–4222.
  17. Wang, L.; Zhang, D.; Yan, Z.; Xiong, H.; Xie, B. effSense: A novel mobile crowd-sensing framework for energy-efficient and cost-effective data uploading. IEEE Trans. Syst. Man Cybern. Syst. 2015, 45, 1549–1563.
  18. Gorriz, C.M.; Medina, C. Engaging girls with computers through software games. Commun. ACM 2000, 43, 42.
  19. Schneider, D.; de Souza, J.; Lucas, E.M. Towards a typology of social news apps from a Crowd Computing perspective. In Proceedings of the 2014 IEEE International Conference on Systems, Man, and Cybernetics (SMC), San Diego, CA, USA, 5–8 October 2014; pp. 1134–1140.
  20. Agreste, S.; De Meo, P.; Ferrara, E.; Piccolo, S.; Provetti, A. Analysis of a heterogeneous social network of humans and cultural objects. IEEE Trans. Syst. Man Cybern. Syst. 2015, 45, 559–570.
  21. Gomes, C.; Schneider, D.; Moraes, K.; De Souza, J. Crowdsourcing for music: Survey and taxonomy. In Proceedings of the 2012 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Seoul, Republic of Korea, 14–17 October 2012; pp. 832–839.
  22. Bongard, J.C.; Hines, P.D.; Conger, D.; Hurd, P.; Lu, Z. Crowdsourcing predictors of behavioral outcomes. IEEE Trans. Syst. Man Cybern. Syst. 2013, 43, 176–185.
  23. Antelio, M.; Esteves, M.G.P.; Schneider, D.; de Souza, J.M. Qualitocracy: A data quality collaborative framework applied to citizen science. In Proceedings of the 2012 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Seoul, Republic of Korea, 14–17 October 2012; pp. 931–936.
  24. Oh, H.K.; Kim, S.W.; Park, S.; Zhou, M. Can you trust online ratings? A mutual reinforcement model for trustworthy online rating systems. IEEE Trans. Syst. Man Cybern. Syst. 2015, 45, 1564–1576.
  25. Gören, S.; Baccouche, A.; Pierreval, H. A framework to incorporate decision-maker preferences into simulation optimization to support collaborative design. IEEE Trans. Syst. Man Cybern. Syst. 2017, 47, 229–237.
  26. Zhang, J.; Yin, Z.; Wang, R. Recognition of mental workload levels under complex human–machine collaboration by using physiological features and adaptive support vector machines. IEEE Trans. Hum. Mach. Syst. 2015, 45, 200–214.
  27. Allahbakhsh, M.; Arbabi, S.; Galavii, M.; Daniel, F.; Benatallah, B. Crowdsourcing planar facility location allocation problems. Computing 2019, 101, 237–261.
  28. Jiang, J.; An, B.; Jiang, Y.; Zhang, C.; Bu, Z.; Cao, J. Group-Oriented Task Allocation for Crowdsourcing in Social Networks. IEEE Trans. Syst. Man Cybern. Syst. 2021, 51, 4417–4432.
  29. Xu, M.; Wang, S.; Hu, Q.; Sheng, H.; Cheng, X. Quantum analysis on task allocation and quality control for crowdsourcing with homogeneous workers. IEEE Trans. Netw. Sci. Eng. 2020, 7, 2830–2839.
  30. Von Ahn, L. Games with a purpose. Computer 2006, 39, 92–94.
  31. Mavandadi, S.; Feng, S.; Yu, F.; Dimitrov, S.; Yu, R.; Ozcan, A. BioGames: A platform for crowd-sourced biomedical image analysis and telediagnosis. Games Health Res. Dev. Clin. Appl. 2012, 1, 373–376.
  32. Wang, L.; Jiang, T. On the complexity of multiple sequence alignment. J. Comput. Biol. 1994, 1, 337–348.
  33. Cooper, S.; Khatib, F.; Treuille, A.; Barbero, J.; Lee, J.; Beenen, M.; Leaver-Fay, A.; Baker, D.; Popović, Z.; Foldit players. Predicting protein structures with a multiplayer online game. Nature 2010, 466, 756.
  34. Kim, J.S.; Greene, M.J.; Zlateski, A.; Lee, K.; Richardson, M.; Turaga, S.C.; Purcaro, M.; Balkam, M.; Robinson, A.; Behabadi, B.F.; et al. Space–time wiring specificity supports direction selectivity in the retina. Nature 2014, 509, 331.
  35. Marx, V. Neuroscience waves to the crowd. Nat. Methods 2013, 10, 1069–1074.
  36. Von Ahn, L. Human computation. In Proceedings of the 2008 IEEE 24th International Conference on Data Engineering, Vancouver, BC, Canada, 7–12 May 2008; pp. 1–2.
  37. Law, E.L.; Von Ahn, L.; Dannenberg, R.B.; Crawford, M. TagATune: A Game for Music and Sound Annotation. In Proceedings of the 8th International Conference on Music Information Retrieval, Vienna, Austria, 23–27 September 2007; Volume 3, p. 2.
  38. Law, E.; West, K.; Mandel, M.I.; Bay, M.; Downie, J.S. Evaluation of Algorithms Using Games: The Case of Music Tagging. In ISMIR; Austrian Computer Society: Wien, Austria, 2009; pp. 387–392.