Hybrid Evolutionary Approaches for Feature Selection: History

The efficiency and effectiveness of a machine learning (ML) model are greatly influenced by feature selection (FS), a crucial preprocessing step in machine learning that seeks the set of features yielding the maximum possible accuracy. Because metaheuristic (or evolutionary) algorithms often outperform traditional optimization techniques, researchers have concentrated on a variety of such algorithms and have proposed cutting-edge hybrid techniques to handle FS problems. The use of hybrid metaheuristic approaches for FS has thus been the subject of numerous research works.

  • metaheuristics
  • feature selection
  • hybridization

1. Introduction

Feature selection (FS) is a method that aims to choose the minimum required features that can represent a dataset by selecting the features that contribute most to the prediction variable of interest to the user [1]. The volume of available data has risen significantly in recent years due to advancements in data-gathering techniques in different fields, resulting in increased processing time and space complexity for the implementation of machine learning (ML) architectures. The data collected in many domains are typically of high dimensionality, making it difficult to select an optimal subset of features and to exclude unnecessary ones. Inappropriate features in a dataset force the employed ML models to learn from irrelevant information, which leads to a poor recognition rate and a large drop in performance. By removing unnecessary and outdated features, FS reduces the dimensionality and improves the quality of the resulting attribute vector [2][3][4]. FS has been used for various purposes, including cancer classification (e.g., to improve the diagnosis of breast cancer and diabetes [5]), speech recognition [6], gene prediction [7], gait analysis [8], and text mining [9].
FS has two essential, opposing goals, namely, reducing the number of selected features and maximizing classification performance, in order to overcome the curse of dimensionality. The three principal kinds of FS strategy are filter, wrapper, and embedded methods, the last of which integrate both filters and wrappers [10][11]. A filter technique is independent of any ML algorithm; it is appropriate for datasets containing fewer features and typically requires only modest computing resources. Filtering approaches do not consider the interaction between classifiers and attributes, and thus filters often fail to classify samples correctly during the learning process.
Many studies have used wrappers to address these problems. A wrapper technique frequently alters the training process and uses a classifier as its assessment mechanism. Thus, wrapper techniques for FS interact directly with the training algorithm and produce more precise results than filters. A wrapper trains the employed ML algorithm on only a candidate subset of the features, and this subset is also used to determine the performance of the trained model. Based on the selection accuracy determined in each preceding phase, a wrapper algorithm decides whether to add a feature to, or remove one from, the selected subset. As a result, wrapper methods are usually more computationally complex and more expensive than most filtering techniques.
Conventional wrapper approaches [12] take a set of attributes and require the user to supply arguments as parameters, after which the most informative attributes are chosen from the feature set in proportion to the arguments provided by the user. A limitation of such techniques is that the selected feature vector is evaluated recursively, so certain characteristics are never included at the first level of assessment. In addition, because the arguments are specified by the user, certain feature combinations cannot be taken into account, even when they might yield more precision. These issues may cause searching overhead as well as overfitting. Evolutionary wrapper approaches, which are more common when the search space is very broad, have been created to address the drawbacks of classic wrapper methods. These approaches have many benefits over conventional wrapper methods, including the fact that they need less domain knowledge. Evolutionary optimization techniques are population-based metaheuristic strategies that tackle a problem with multiple candidate solutions represented by a group of individuals. In FS tasks, each individual represents a part of the feature vector (i.e., a candidate feature subset). An objective (target) function is employed to evaluate and assess the quality of every candidate solution. The chosen individuals are subjected to genetic operators in order to produce the new individuals that comprise the next generation [13].
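To make the wrapper formulation concrete, the following is a minimal sketch of a fitness function for a binary-encoded candidate solution. It assumes a k-NN classifier, scikit-learn, and an illustrative weighting between accuracy and subset size; it is a generic example rather than the exact formulation of any cited work.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def wrapper_fitness(mask, X, y, alpha=0.99):
    """Score a 0/1 feature mask (higher is better).

    alpha weights classification accuracy against subset size, a combination
    commonly used in evolutionary wrapper FS (the value here is illustrative).
    """
    selected = np.flatnonzero(mask)
    if selected.size == 0:                      # an empty subset is infeasible
        return 0.0
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=5),
                          X[:, selected], y, cv=5).mean()
    size_ratio = selected.size / X.shape[1]     # fraction of features kept
    return alpha * acc + (1 - alpha) * (1 - size_ratio)
```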
A plethora of metaheuristic variants has already been developed to support FS tasks. When designing a metaheuristic approach, exploration and exploitation are two opposing aspects that must be taken into account. To increase the effectiveness of these algorithms, it is essential to establish a good balance between these two aspects, because an algorithm may perform well in some situations but poorly in others. Every nature-inspired approach has its own advantages and disadvantages; hence, it is not always practical to predict which algorithm is best for a given situation [14].

2. Hybrid Evolutionary Approaches for Feature Selection

In real-world applications, it is rare for all the attributes of a dataset to be useful when building an ML model. The inclusion of unwanted and redundant attributes lessens the model’s classification capability and accuracy. As more factors are added to an ML framework, its complexity increases [15][16]. By finding and assembling an ideal set of features, FS in ML aims to produce useful models of the problem under study [17]. Some important advantages of FS are [10][12]:
  • reducing overfitting and eliminating redundant data,
  • improving accuracy and reducing misleading results, and
  • reducing the ML algorithm training time, dropping the algorithm complexity, and speeding up the training process.
The prime components of an FS process are presented in Figure 1; they are as follows [18].
Figure 1. Key factors of feature selection.
1. Searching Techniques: To obtain the best features with the highest accuracy, a searching approach must be applied in an FS process. Exhaustive search, heuristic search, and evolutionary computation (EC) are a few popular searching methods. Exhaustive search is explored in a few works [15][16]. Numerous heuristic and greedy techniques, such as sequential forward selection (SFS) [19] and sequential backward selection (SBS), have therefore been used for FS [20]. However, both SFS and SBS suffer from the “nesting effect” problem: once a feature has been selected in SFS it cannot be discarded later, while a feature discarded in SBS cannot be selected again, so earlier decisions cannot be revised in later parts of the FS process. These two approaches can be combined by applying SFS l times and then applying SBS r times [21]. The nesting effect can be reduced by such a method, but the correct values of l and r must be determined carefully. Sequential forward and backward floating methods were presented to avoid this problem [19]. A two-layer cutting plane approach was recently suggested in [20] to evaluate the best subsets of characteristics. In [21], an exhaustive FS search with backtracking and a heuristic search was proposed. A compact sketch of SFS is given after this paragraph.
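As referenced above, a compact SFS sketch under the stated assumptions (a wrapper-style evaluator such as wrapper_fitness from the previous section, greedy addition only) could look as follows; it deliberately exhibits the nesting effect, since a feature, once added, is never removed.

```python
def sequential_forward_selection(evaluate, n_features, max_size=None):
    """Greedy SFS: repeatedly add the single feature that most improves the score."""
    selected, best_score = [], float("-inf")
    max_size = max_size or n_features
    while len(selected) < max_size:
        best_candidate, candidate_score = None, best_score
        for f in range(n_features):
            if f in selected:
                continue
            score = evaluate(selected + [f])     # wrapper evaluation of the trial subset
            if score > candidate_score:
                best_candidate, candidate_score = f, score
        if best_candidate is None:               # no single addition improves the score
            break
        selected.append(best_candidate)
        best_score = candidate_score
    return selected, best_score
```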
Various EC approaches have been proposed in recent years to tackle the challenges of FS problems successfully. Some of them are differential evolution (DE) [22], genetic algorithms (GAs) [23], grey wolf optimization (GWO) [24][25], ant colony optimization (ACO) [26][27][28], binary Harris hawks optimization (BHHO) [29][30] and improved BHHO (IBHHO) [31], binary ant lion optimization (BALO) [32][33], the salp swarm algorithm (SSA) [34], the dragonfly algorithm (DA) [35], the multiverse algorithm (MVA) [36], Jaya optimization algorithms such as FS based on the Jaya optimization algorithm (FSJaya) [37] and FS based on the adaptive Jaya algorithm (AJA) [38], the grasshopper optimization algorithm (GOA) and its binary versions [39], binary teaching-learning-based optimization (BTLBO) [40], harmony search (HS) [41], and the vortex search algorithm (VSA) [42]. All these techniques have been applied to FS on various types of datasets and have been demonstrated to achieve high optimization rates and to increase the classification accuracy (CA). EC techniques require no domain knowledge and do not assume that the training dataset is linearly separable. Another valuable aspect of EC methods is that their population-based process can deliver several solutions in a single run. However, EC approaches often entail considerable computational costs because they typically involve a large number of evaluations. The stability of an EC approach is also a critical concern, as the respective algorithms often pick different features across different runs. Further research is required, as the growing number of features in large-scale datasets also raises computational costs and decreases the consistency of EC algorithms [13] in certain real-world FS applications. A high-level description of the most used EC algorithms is given below.
  • Genetic Algorithm (GA): A GA [43] is a metaheuristic influenced by the process of natural selection that belongs to the larger class of evolutionary algorithms in computer science and operations research. A GA relies on biologically inspired operators, such as mutation, crossover, and selection, to develop high-quality solutions to optimization and search problems. The GA mimics the mechanism that governs biological evolution and can tackle both constrained and unconstrained optimization problems by iteratively adjusting a population of candidate solutions.
  • Particle Swarm Optimization (PSO): PSO is a bioinspired algorithm that is straightforward to use when looking for the best alternative in the solution space. It differs from other optimization techniques in that it requires only the objective function and is unaffected by the gradient or any differential form of the objective. It also has a small number of hyperparameters. Kennedy and Eberhart proposed PSO in 1995 [44]. Sociobiologists think that a school of fish or a flock of birds moving in a group “may profit from the experience of all other members”, as stated in the original publication. In other words, while a bird is flying around looking for food at random, all of the birds in the flock can share what they find and help the entire flock achieve the best hunt possible (a minimal binary PSO sketch for FS is given after this list).
  • Grey Wolf Optimizer (GWO): Mirjalili et al. [45] presented GWO as a new metaheuristic in 2014. The grey wolf’s social order and hunting mechanisms inspired the algorithm. Four wolves, or levels of the social hierarchy, are considered when creating GWO:
    –the α wolf: the solution with the best fitness value;
    –the β wolf: the solution with the second-best fitness value;
    –the δ wolf: the solution with the third-best fitness value; and
    –the ω wolves: all other solutions.
    As a result, the algorithm’s hunting mechanism is guided by the three fittest wolves, α, β, and δ. The remaining wolves are regarded as ω and follow them. Grey wolves follow a set of well-defined steps during hunting: encircling, hunting, and attacking.
  • Harris Hawk Optimization (HHO): Heidari and his team introduced HHO as a new metaheuristic algorithm in 2019 [46]. HHO models the cooperative prey tracking, surprise pounce, and diverse attack strategies that Harris hawks use in nature. In HHO, the hawks represent candidate solutions, whereas the prey represents the best solution. The Harris hawks use their keen vision to track the target and then perform a surprise pounce to seize the prey they have spotted. In general, HHO is divided into two phases: exploration and exploitation. The algorithm switches from exploration to exploitation, and the search behaviour is then adjusted depending on the escaping energy of the fleeing prey.
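As an illustration of how such population-based methods are applied to FS, the following is a minimal sketch of a binary PSO wrapper (referenced in the PSO item above). It assumes a maximizing fitness function such as wrapper_fitness and a sigmoid transfer function; the parameter values are illustrative defaults, not tuned recommendations from the cited works.

```python
import numpy as np

def binary_pso_fs(fitness, n_features, n_particles=20, n_iters=50,
                  w=0.7, c1=1.5, c2=1.5, seed=0):
    """Binary PSO wrapper: positions are 0/1 feature masks, velocities are real-valued."""
    rng = np.random.default_rng(seed)
    X = rng.integers(0, 2, size=(n_particles, n_features))      # initial binary positions
    V = rng.uniform(-1.0, 1.0, size=(n_particles, n_features))  # initial velocities
    pbest = X.copy()
    pbest_val = np.array([fitness(x) for x in X])
    gbest = pbest[np.argmax(pbest_val)].copy()
    gbest_val = pbest_val.max()
    for _ in range(n_iters):
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)        # velocity update
        X = (rng.random(X.shape) < 1.0 / (1.0 + np.exp(-V))).astype(int)  # sigmoid transfer
        vals = np.array([fitness(x) for x in X])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = X[improved], vals[improved]
        if pbest_val.max() > gbest_val:
            gbest = pbest[np.argmax(pbest_val)].copy()
            gbest_val = pbest_val.max()
    return gbest, gbest_val
```

For instance, calling binary_pso_fs(lambda m: wrapper_fitness(m, X, y), X.shape[1]) would return the best feature mask found and its fitness under these illustrative settings.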
2. Criteria for Evaluation: The common evaluation criterion for wrapper FS techniques is the classification performance achieved with the selected attributes. Decision trees (DTs), support vector machines (SVMs), naive Bayes (NB), k-nearest neighbors (KNN), artificial neural networks (ANNs), and linear discriminant analysis (LDA) are just a few examples of common classifiers that have been used as wrappers in FS applications [47][48][49]. In the domain of filter approaches, measures from a variety of disciplines have been incorporated, particularly information theory, correlation estimates, distance metrics, and consistency criteria [50]. Individual feature evaluation, which ranks each feature according to a particular criterion, is a basic filter approach in which only the top-ranked features are selected [47]. Relief [51] is a distinctive case in which a distance metric is applied to assess the significance of features. Filter methods are often computationally inexpensive, but they do not consider relationships among attributes, which causes problems with redundant feature sets, such as microarray gene data, where the genes are intrinsically correlated [17][50]. To overcome these issues, it is necessary to employ filter measures that choose a suitable subset of relevant features by evaluating the whole feature set. Wang et al. [52] recently published a distance measure that assesses the difference between the chosen feature space and the space spanned by all features in order to locate a subset of features that approximates the full set. Peng et al. [53] introduced the minimum redundancy maximum relevance (MRMR) approach based on mutual information, and such measures have been incorporated into EC methods because of their powerful exploration capability [54][55]. A unified selection approach was proposed by Mao and Tsang [20], which optimizes multivariate performance measures but also results in an enormous search space for high-dimensional data, a problem that requires strong heuristic search methods to find the best output. Several relatively straightforward statistical methods, such as t-tests, logistic regression (LR), hierarchical clustering, and classification and regression trees (CART), can be applied jointly to produce better classification results [56]. Recently, the authors of [57] applied sparse LR to FS problems involving millions of features. Min et al. [21] developed a rough-set-based procedure to solve FS tasks under budget and schedule constraints. Many experiments show that most filter mechanisms are inefficient for cases with vast numbers of features [58]. A small sketch of a mutual-information filter ranking follows this item.
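For comparison with the wrapper sketches above, a filter-style ranking in the spirit of the information-theoretic measures just discussed can be sketched as follows. It assumes scikit-learn's mutual_info_classif and simply keeps the k top-scoring features, without modelling redundancy as MRMR does.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def mutual_info_filter(X, y, k=10):
    """Rank features by mutual information with the class label and keep the top k."""
    scores = mutual_info_classif(X, y, random_state=0)
    return np.argsort(scores)[::-1][:k]   # indices of the k most informative features
```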

3. Number of Objectives: Single-objective (SO) optimization frameworks are techniques that combine the classifier’s accuracy and the number of selected features into a single optimization function. By contrast, multiobjective (MO) optimization approaches are designed to find and balance the tradeoffs among competing objectives. In an SO setting, a solution’s superiority over other solutions is determined by comparing the resulting fitness values, while in MO optimization, the notion of Pareto dominance is employed to identify the best solutions [59]. In particular, to determine the significance of the derived feature sets in an MO setting, multiple criteria need to be optimized by considering different parameters. MO strategies may thus be used to solve challenging problems involving multiple conflicting goals, with fitness functions that minimize or maximize several conflicting objectives simultaneously [60]. A minimal illustration of the two formulations is sketched below.
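To make the distinction concrete, the following minimal sketch assumes that each candidate subset is summarized by two minimization objectives (classification error and the fraction of selected features); the weighted sum and the Pareto dominance test are generic textbook forms, not the formulation of any specific cited study.

```python
def weighted_fitness(error, size_ratio, alpha=0.99):
    """Single-objective aggregation of the two goals (lower is better); alpha is illustrative."""
    return alpha * error + (1 - alpha) * size_ratio

def dominates(a, b):
    """Pareto dominance for minimization: a dominates b if it is no worse in every
    objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
```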

This entry is adapted from the peer-reviewed paper 10.3390/a16030167

References

  1. Piri, J.; Mohapatra, P.; Dey, R. Fetal Health Status Classification Using MOGA—CD Based Feature Selection Approach. In Proceedings of the IEEE International Conference on Electronics, Computing and Communication Technologies (CONECCT), Bangalore, India, 2–4 July 2020; pp. 1–6.
  2. Bhattacharyya, T.; Chatterjee, B.; Singh, P.K.; Yoon, J.H.; Geem, Z.W.; Sarkar, R. Mayfly in Harmony: A New Hybrid Meta-Heuristic Feature Selection Algorithm. IEEE Access 2020, 8, 195929–195945.
  3. Piri, J.; Mohapatra, P. Exploring Fetal Health Status Using an Association Based Classification Approach. In Proceedings of the IEEE International Conference on Information Technology (ICIT), Bhubaneswar, India, 19–21 December 2019; pp. 166–171.
  4. Piri, J.; Mohapatra, P.; Acharya, B.; Gharehchopogh, F.S.; Gerogiannis, V.C.; Kanavos, A.; Manika, S. Feature Selection Using Artificial Gorilla Troop Optimization for Biomedical Data: A Case Analysis with COVID-19 Data. Mathematics 2022, 10, 2742.
  5. Jain, D.; Singh, V. Diagnosis of Breast Cancer and Diabetes using Hybrid Feature Selection Method. In Proceedings of the 5th International Conference on Parallel, Distributed and Grid Computing (PDGC), Solan, India, 20–22 December 2018; pp. 64–69.
  6. Mendiratta, S.; Turk, N.; Bansal, D. Automatic Speech Recognition using Optimal Selection of Features based on Hybrid ABC-PSO. In Proceedings of the IEEE International Conference on Inventive Computation Technologies (ICICT), Coimbatore, India, 26–27 August 2016; Volume 2, pp. 1–7.
  7. Naik, A.; Kuppili, V.; Edla, D.R. Binary Dragonfly Algorithm and Fisher Score Based Hybrid Feature Selection Adopting a Novel Fitness Function Applied to Microarray Data. In Proceedings of the International IEEE Conference on Applied Machine Learning (ICAML), Bhubaneswar, India, 27–28 September 2019; pp. 40–43.
  8. Monica, K.M.; Parvathi, R. Hybrid FOW—A Novel Whale Optimized Firefly Feature Selector for Gait Analysis. Pers. Ubiquitous Comput. 2021, 1–13.
  9. Azmi, R.; Pishgoo, B.; Norozi, N.; Koohzadi, M.; Baesi, F. A Hybrid GA and SA Algorithms for Feature Selection in Recognition of Hand-printed Farsi Characters. In Proceedings of the IEEE International Conference on Intelligent Computing and Intelligent Systems, Xiamen, China, 29–31 October 2010; Volume 3, pp. 384–387.
  10. Al-Tashi, Q.; Abdulkadir, S.J.; Rais, H.M.; Mirjalili, S.; Alhussian, H. Approaches to Multi-Objective Feature Selection: A Systematic Literature Review. IEEE Access 2020, 8, 125076–125096.
  11. Brezočnik, L.; Fister, I.; Podgorelec, V. Swarm Intelligence Algorithms for Feature Selection: A Review. Appl. Sci. 2018, 8, 1521.
  12. Venkatesh, B.; Anuradha, J. A Review of Feature Selection and Its Methods. Cybern. Inf. Technol. 2019, 19, 3–26.
  13. Abd-Alsabour, N. A Review on Evolutionary Feature Selection. In Proceedings of the IEEE European Modelling Symposium, Pisa, Italy, 21–23 October 2014; pp. 20–26.
  14. Wolpert, D.H.; Macready, W.G. No Free Lunch Theorems for Optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82.
  15. Blum, A.; Langley, P. Selection of Relevant Features and Examples in Machine Learning. Artif. Intell. 1997, 97, 245–271.
  16. Liu, H.; Motoda, H. Feature Selection for Knowledge Discovery and Data Mining; The Springer International Series in Engineering and Computer Science; Springer: Berlin/Heidelberg, Germany, 1998; Volume 454.
  17. Guyon, I.; Elisseeff, A. An Introduction to Variable and Feature Selection. J. Mach. Learn. Res. 2003, 3, 1157–1182.
  18. Piri, J.; Mohapatra, P.; Dey, R.; Panda, N. Role of Hybrid Evolutionary Approaches for Feature Selection in Classification: A Review. In Proceedings of the International Conference on Metaheuristics in Software Engineering and its Application, Marrakech, Morocco, 27–30 October 2022; pp. 92–103.
  19. Pudil, P.; Novovicová, J.; Kittler, J. Floating Search Methods in Feature Selection. Pattern Recognit. Lett. 1994, 15, 1119–1125.
  20. Mao, Q.; Tsang, I.W. A Feature Selection Method for Multivariate Performance Measures. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 2051–2063.
  21. Min, F.; Hu, Q.; Zhu, W. Feature Selection with Test Cost Constraint. Int. J. Approx. Reason. 2014, 55, 167–179.
  22. Vivekanandan, T.; Iyengar, N.C.S.N. Optimal Feature Selection using a Modified Differential Evolution Algorithm and its Effectiveness for Prediction of Heart Disease. Comput. Biol. Med. 2017, 90, 125–136.
  23. Sahebi, G.; Movahedi, P.; Ebrahimi, M.; Pahikkala, T.; Plosila, J.; Tenhunen, H. GeFeS: A Generalized Wrapper Feature Selection Approach for Optimizing Classification Performance. Comput. Biol. Med. 2020, 125, 103974.
  24. Al-Tashi, Q.; Rais, H.; Jadid, S. Feature Selection Method Based on Grey Wolf Optimization for Coronary Artery Disease Classification. In Proceedings of the International Conference of Reliable Information and Communication Technology, Kuala Lumpur, Malaysia, 23–24 July 2018; pp. 257–266.
  25. Too, J.; Abdullah, A.R. Opposition based Competitive Grey Wolf Optimizer for EMG Feature Selection. Evol. Intell. 2021, 14, 1691–1705.
  26. Aghdam, M.H.; Ghasem-Aghaee, N.; Basiri, M.E. Text Feature Selection using Ant Colony Optimization. Expert Syst. Appl. 2009, 36, 6843–6853.
  27. Erguzel, T.T.; Tas, C.; Cebi, M. A Wrapper-based Approach for Feature Selection and Classification of Major Depressive Disorder-Bipolar Disorders. Comput. Biol. Med. 2015, 64, 127–137.
  28. Huang, H.; Xie, H.; Guo, J.; Chen, H. Ant Colony Optimization-based Feature Selection Method for Surface Electromyography Signals Classification. Comput. Biol. Med. 2012, 42, 30–38.
  29. Piri, J.; Mohapatra, P. An Analytical Study of Modified Multi-objective Harris Hawk Optimizer towards Medical Data Feature Selection. Comput. Biol. Med. 2021, 135, 104558.
  30. Too, J.; Abdullah, A.R.; Saad, N.M. A New Quadratic Binary Harris Hawk Optimization for Feature Selection. Electronics 2019, 8, 1130.
  31. Zhang, Y.; Liu, R.; Wang, X.; Chen, H.; Li, C. Boosted Binary Harris Hawks Optimizer and Feature Selection. Eng. Comput. 2021, 37, 3741–3770.
  32. Emary, E.; Zawbaa, H.M.; Hassanien, A.E. Binary Ant Lion Approaches for Feature Selection. Neurocomputing 2016, 213, 54–65.
  33. Piri, J.; Mohapatra, P.; Dey, R. Multi-objective Ant Lion Optimization Based Feature Retrieval Methodology for Investigation of Fetal Wellbeing. In Proceedings of the 3rd IEEE International Conference on Inventive Research in Computing Applications (ICIRCA), Coimbatore, India, 21–23 September 2021; pp. 1732–1737.
  34. Hegazy, A.E.; Makhlouf, M.A.A.; El-Tawel, G.S. Improved Salp Swarm Algorithm for Feature Selection. J. King Saud Univ.-Comput. Inf. Sci. 2020, 32, 335–344.
  35. Mafarja, M.M.; Aljarah, I.; Heidari, A.A.; Faris, H.; Fournier-Viger, P.; Li, X.; Mirjalili, S. Binary Dragonfly Optimization for Feature Selection using Time-varying Transfer Functions. Knowl. Based Syst. 2018, 161, 185–204.
  36. Sreejith, S.; Nehemiah, H.K.; Kannan, A. Clinical Data Classification using an Enhanced SMOTE and Chaotic Evolutionary Feature Selection. Comput. Biol. Med. 2020, 126, 103991.
  37. Das, H.; Naik, B.; Behera, H.S. A Jaya Algorithm based Wrapper Method for Optimal Feature Selection in Supervised Classification. J. King Saud Univ.-Comput. Inf. Sci. 2020, 34, 3851–3863.
  38. Tiwari, V.; Jain, S.C. An Optimal Feature Selection Method for Histopathology Tissue Image Classification using Adaptive Jaya Algorithm. Evol. Intell. 2021, 14, 1279–1292.
  39. Haouassi, H.; Merah, E.; Rafik, M.; Messaoud, M.T.; Chouhal, O. A new Binary Grasshopper Optimization Algorithm for Feature Selection Problem. J. King Saud Univ.-Comput. Inf. Sci. 2022, 34, 316–328.
  40. Mohan, A.; Nandhini, M. Optimal Feature Selection using Binary Teaching Learning based Optimization Algorithm. J. King Saud Univ.-Comput. Inf. Sci. 2022, 34, 329–341.
  41. Dash, R. An Adaptive Harmony Search Approach for Gene Selection and Classification of High Dimensional Medical Data. J. King Saud Univ.-Comput. Inf. Sci. 2021, 33, 195–207.
  42. Gharehchopogh, F.S.; Maleki, I.; Dizaji, Z.A. Chaotic Vortex Search Algorithm: Metaheuristic Algorithm for Feature Selection. Evol. Intell. 2022, 15, 1777–1808.
  43. Mitchell, M. An Introduction to Genetic Algorithms; MIT Press: Cambridge, MA, USA, 1998.
  44. Kennedy, J.; Eberhart, R. Particle Swarm Optimization. In Proceedings of the IEEE International Conference on Neural Networks (ICNN), Perth, WA, Australia, 27 November–1 December 1995; pp. 1942–1948.
  45. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
  46. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.M.; Chen, H. Harris Hawks Optimization: Algorithm and Applications. Future Gener. Comput. Syst. 2019, 97, 849–872.
  47. Liu, H.; Zhao, Z. Manipulating Data and Dimension Reduction Methods: Feature Selection. In Encyclopedia of Complexity and Systems Science; Springer: Berlin/Heidelberg, Germany, 2009; pp. 5348–5359.
  48. Liu, H.; Motoda, H.; Setiono, R.; Zhao, Z. Feature Selection: An Ever Evolving Frontier in Data Mining. In Proceedings of the 4th International Workshop on Feature Selection in Data Mining (FSDM), Hyderabad, India, 21 June 2010; Volume 10, pp. 4–13.
  49. Xue, B.; Zhang, M.; Browne, W.N. Particle Swarm Optimization for Feature Selection in Classification: A Multi-Objective Approach. IEEE Trans. Cybern. 2013, 43, 1656–1671.
  50. Dash, M.; Liu, H. Feature Selection for Classification. Intell. Data Anal. 1997, 1, 131–156.
  51. Kira, K.; Rendell, L.A. A Practical Approach to Feature Selection. In Proceedings of the 9th International Workshop on Machine Learning (ML), San Francisco, CA, USA, 1–3 July 1992; Morgan Kaufmann: Burlington, MA, USA, 1992; pp. 249–256.
  52. Wang, S.; Pedrycz, W.; Zhu, Q.; Zhu, W. Subspace learning for unsupervised feature selection via matrix factorization. Pattern Recognit. 2015, 48, 10–19.
  53. Peng, H.; Long, F.; Ding, C.H.Q. Feature Selection Based on Mutual Information: Criteria of Max-Dependency, Max-Relevance, and Min-Redundancy. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 27, 1226–1238.
  54. Cervante, L.; Xue, B.; Zhang, M.; Shang, L. Binary Particle Swarm Optimisation for Feature Selection: A Filter based Approach. In Proceedings of the IEEE Congress on Evolutionary Computation (CEC), Brisbane, Australia, 10–15 June 2012; pp. 1–8.
  55. Ünler, A.; Murat, A.E.; Chinnam, R.B. mr2PSO: A Maximum Relevance Minimum Redundancy Feature Selection Method based on Swarm Intelligence for Support Vector Machine Classification. Inf. Sci. 2011, 181, 4625–4641.
  56. Tan, N.C.; Fisher, W.G.; Rosenblatt, K.P.; Garner, H.R. Application of Multiple Statistical Tests to Enhance Mass Spectrometry-based Biomarker Discovery. BMC Bioinform. 2009, 10, 144.
  57. Tan, M.; Tsang, I.W.; Wang, L. Minimax Sparse Logistic Regression for Very High-Dimensional Feature Selection. IEEE Trans. Neural Netw. Learn. Syst. 2013, 24, 1609–1622.
  58. Zhai, Y.; Ong, Y.; Tsang, I.W. The Emerging “Big Dimensionality”. IEEE Comput. Intell. Mag. 2014, 9, 14–26.
  59. Thiele, L.; Miettinen, K.; Korhonen, P.J.; Luque, J.M. A Preference-Based Evolutionary Algorithm for Multi-Objective Optimization. Evol. Comput. 2009, 17, 411–436.
  60. Bui, L.T.; Alam, S. Multi-Objective Optimization in Computational Intelligence: Theory and Practice; IGI Global: Hershey, PA, USA, 2008.