Liu, S.; Yang, C. Machine Learning Design for High-Entropy Alloys. Encyclopedia. Available online: https://encyclopedia.pub/entry/55654 (accessed on 15 April 2024).
Machine Learning Design for High-Entropy Alloys

High-entropy alloys (HEAs) have attracted worldwide interest due to their excellent properties and vast compositional space for design. However, obtaining HEAs with low density and high performance through experimental trial-and-error methods is inefficient and costly. Although high-throughput calculation (HTC) improves the design efficiency of HEAs, prediction accuracy is limited owing to the indirect correlation between theoretical calculation values and performance. Machine learning (ML), which learns from real data closely related to performance, has attracted increasing attention as an aid to material design.

Keywords: artificial intelligence design; high-entropy alloy; machine learning

1. Introduction

The concept of high-entropy alloys (HEAs) was proposed by Cantor [1] and Yeh [2] in 2004. HEAs usually consist of four or five elements in equal or near-equal atomic percentages (at.%), with the atomic fraction of each component typically greater than five percent [3]. Their configurational entropy of mixing is high, which favors the formation of solid-solution phases [4]. They mainly adopt face-centered cubic (FCC), body-centered cubic (BCC), and hexagonal close-packed (HCP) structures [5]. Unlike conventional alloys, the complex compositions of HEAs lead to exceptional effects. HEAs usually exhibit outstanding physical and chemical properties, e.g., excellent mechanical properties, superior fatigue and wear resistance, good ferromagnetic and superparamagnetic properties, and excellent irradiation and corrosion resistance [6][7][8][9][10]. Through optimized composition design, HEAs with lower density and better performance can be obtained, realizing lightweight HEAs [7]. However, owing to the flexible compositions and ample performance-tuning space, obtaining HEAs with low density and high performance solely through experimental trial and error requires a substantial investment of time and labor, resulting in low efficiency and high costs.
In recent years, computer-assisted design methods have made significant progress in the field of HEAs. High-throughput calculation (HTC) is one promising computer-assisted design method, characterized by concurrent calculations and an automated workflow that enable efficient computation at scale rather than the sequential processing of multiple tasks [11]. HTC initially focused on the quantum scale, effectively meeting the demand to expedite the discovery of new materials with exceptional performance. More recently, the concept of HTC has been applied at micro-thermodynamic scales, becoming a rapid method for obtaining phase information in metallic structural materials [12]. High-throughput first-principles calculations and thermodynamics calculations are the two main HTC technologies. High-throughput first-principles calculations, which do not rely on empirical parameters, can predict material property data from element types and atomic coordinates alone [13]. They play an indispensable role in understanding and designing target materials from a microscopic perspective and enable the quantitative prediction of composition optimization, phase composition, and the structure–property relationship of materials. High-throughput first-principles calculations demonstrate specific roles and advantages for three aspects of HEAs: (1) the accurate construction of long-range disordered and short-range ordered structures; (2) the precise prediction of the stability of HEA phases; and (3) the accurate calculation of the mechanical properties of HEAs [14]. The process of screening HEAs based on high-throughput thermodynamics calculations combines equilibrium calculations with non-equilibrium Scheil solidification calculations [15].
By using high-throughput calculations to predict the melting point, phase composition, and thermodynamic properties of HEAs after processing, the alloy composition space satisfying criteria such as melting point and phase volume fraction can be obtained rapidly [16]. This assists in the quick identification of effective alloy compositions, reducing the frequency of experimental trial and error. High-throughput thermodynamics calculations demonstrate specific functions and advantages in three ways: (1) the accurate acquisition of the phase diagrams and thermodynamic properties of HEAs; (2) the rapid retrieval of key microstructural parameters for HEAs; and (3) the implementation of cross-scale analysis [12]. However, HTC mainly uses theoretical calculation values, such as phases, melting points, and various energies, as its data sources. Although the amount of data used in HTC is huge, the direct correlation with HEA performance and the accuracy of performance prediction are far from satisfactory. Therefore, the current HTC method can only serve as a reference criterion for HEA design, and a certain number of experiments is still needed to verify the accuracy of HTC design results.
In the past decade, the rapid ascent of artificial intelligence (AI) has brought a transformative revolution [17]. This revolution has not only fundamentally reshaped various domains of computer science, including computer vision and natural language processing, but has also made a significant impact on numerous scientific fields, including materials science. AI's success comes from its ability to comprehend complicated patterns, and AI models and algorithms can be systematically refined through learning from real data closely related to performance [18]. This capability is further enhanced by the availability of computational resources, efficient algorithms, and substantial data collected from experiments or simulations; the exponential increase in relevant publications is indicative of this trend. In essence, with a sufficiently large, high-quality dataset, AI can effectively capture intricate atomic interactions through the standard procedures of training, validation, and testing [19]. Additionally, AI models and algorithms can identify non-linear structure–property relationships that are challenging to determine through human observation [20]. These attributes position AI as an effective tool for tackling the challenges associated with the theoretical modeling of materials [21]. Machine learning (ML) is one of the most important technologies for the AI design of materials [22]. Based on comprehensive experimental and theoretical studies, ML enables rapid data mining, reveals underlying information and patterns, and accurately predicts material properties for target material selection [23]. However, the small size of available datasets is a key issue in HEA design, placing high demands on the accuracy and generalization ability of ML models and algorithms.

2. Machine Learning (ML) in HEA Design

ML is a multidisciplinary field involving probability theory, statistics, approximation theory, and algorithmic complexity theory [24]. The concept of ML, first introduced by Samuel in 1959, has evolved into a cross-disciplinary field spanning computer science, statistics, and other fields. Due to its efficient computational and predictive capabilities, ML has been gradually applied in materials science research [25]. In recent years, ML has gained widespread attention and demonstrated outstanding capabilities in the development of new materials and the prediction of material properties in the field of materials science [26]. A notable example is the 2016 article published in Nature, titled “Machine-learning-assisted materials discovery using failed experiments”, which successfully predicted chemical reactions and the formation of new compounds by mining a large dataset of failed experimental data, further fueling the momentum of research related to the application of ML to materials [27].
With an in-depth understanding of the concept of materials engineering, ML has found extensive applications in the design, screening, and performance prediction and optimization of materials [28]. Data-driven methods significantly expedite the research and development process, reducing time and computational costs. Whether on the micro or macro scale, this approach can be applied to new material discovery and the prediction of material properties in the field of materials science.
The discussion of phase-formation rules has always accompanied research on HEAs, and phases play a crucial role in HEA design. In HEA design strategies, predicting the composition and phase stability of unknown alloy components is essential [29]. Widely used descriptors for phase prediction include the entropy of mixing, enthalpy of mixing, elastic constants, melting temperature, valence electron concentration, and electronegativity [30]. As research advances, models increasingly use the alloy's elemental contents as inputs, or various combinations of the intrinsic physical and mechanical properties of the constituent elements, such as the atomic radius difference, valence electron count, configurational entropy, and mixing enthalpy. By modeling these inputs directly, relationships between element combinations and phase formation can be obtained.
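As a concrete illustration, two of the descriptors listed above can be computed directly from a composition. The sketch below evaluates the configurational entropy of mixing and the atomic size difference δ for a hypothetical equiatomic five-component alloy; the atomic radii are illustrative placeholder values, not measured data.

```python
import numpy as np

# Hypothetical equiatomic five-component alloy: atomic fractions c_i
# and illustrative (NOT measured) atomic radii r_i in angstroms.
c = np.array([0.2, 0.2, 0.2, 0.2, 0.2])      # must sum to 1
r = np.array([1.25, 1.28, 1.26, 1.27, 1.24])

R = 8.314  # gas constant, J/(mol K)

# Configurational entropy of mixing: dS_mix = -R * sum_i c_i ln c_i
dS_mix = -R * np.sum(c * np.log(c))

# Atomic size difference: delta = sqrt(sum_i c_i * (1 - r_i / r_bar)^2)
r_bar = np.sum(c * r)
delta = np.sqrt(np.sum(c * (1.0 - r / r_bar) ** 2))

print(f"dS_mix = {dS_mix:.2f} J/(mol K), delta = {100 * delta:.2f}%")
```

For an equiatomic five-component alloy the entropy term reduces to R ln 5 ≈ 13.38 J/(mol K), which is why such compositions sit comfortably in the "high-entropy" regime.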
Besides phase formation, exploring the relationship between the compositions and properties of HEAs is also an essential task. By establishing a correlation model between feature parameters and properties such as strength, it is possible to achieve the rapid prediction of material performance based on chemical composition. This method, supplemented by a substantial amount of experimental data, offers valuable guidance for alloy composition design.

3. Common ML Models and Algorithms in HEA Design

So far, commonly used ML models and algorithms in HEA design include neural networks (NNs) [31][32][33][34][35][36][37][38][39][40][41][42][43], support vector machines (SVMs) [44][45][46][47][48][49][50][51][52][53][54], Gaussian processes (GPs) [36][55][56][57][58][59][60][61], k-nearest neighbors (KNN) [62][63][64][65][66], and random forests (RFs) [67][68], etc.

3.1. Neural Networks (NNs)

NNs are computational models inspired by the structure of the human brain [69]. The basic units of an NN are neurons, which simulate the connections and information transmission between biological neurons [70]. They are organized into layers, such as the input layer, hidden layers, and output layer [71]. Each neuron receives inputs from neurons in the previous layer, applies weights to these inputs, and produces an output through an activation function [72]. This output serves as the input for neurons in the next layer. By adjusting the connection weights, NNs can learn and adapt to patterns in the input data, enabling them to perform tasks such as learning and prediction. NNs have achieved significant success in areas such as image recognition, natural language processing, and speech recognition, demonstrating powerful performance in various applications. However, NN models and algorithms typically require very large amounts of data.
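A minimal, hedged sketch of the layered weighted-sum-plus-activation idea described above, using scikit-learn's MLPRegressor on synthetic composition–property data; the network size, target function, and dataset are illustrative assumptions, not taken from any cited study.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.dirichlet(np.ones(5), size=200)  # random 5-element compositions (rows sum to 1)
# Hypothetical smooth property of the composition, plus a little noise.
y = np.sin(3.0 * X[:, 0]) + 0.5 * X[:, 1] + rng.normal(0.0, 0.05, 200)

# One hidden layer of 16 neurons; connection weights are adjusted by
# backpropagation until the network captures the composition-property pattern.
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
model.fit(X, y)
pred = model.predict(X[:3])
print(pred)
```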
J. Wang et al. [73] developed ensemble NN models to design HEAs with higher yield strength (YS) and ultimate tensile strength (UTS). They collected 501 data points from previous studies for NN model training and validation, covering the chemical composition, process conditions, and tensile mechanical properties of HEAs. The basic models in their experiment were a simple deep neural network (DNN) and a model concatenating a DNN with a convolutional neural network (CNN). They performed inverse prediction via a random search and designed two HEAs, HEA1 and HEA2, which were evaluated using multiple tensile tests. The results show that the combinations of UTS and total elongation (T.EL.) of these HEAs are better than those of the input data, as well as the HEAs from previous ML studies. This research demonstrated the effectiveness of the model in HEA design. This alloy design approach, specialized in finding multiple local optima, could help researchers design a vast number of new alloys with interesting properties.

3.2. Support Vector Machine (SVM) Algorithm

Among ML models, the SVM algorithm produces a good decision boundary [74]. The SVM algorithm was introduced by Vapnik as a supervised learning method in the category of binary classification models [75]. Its primary objective is to identify a separating hyperplane in the feature space that maximizes the margin, ensuring the correct classification of samples; this is ultimately formulated as a convex optimization problem. The segmentation principle of the SVM algorithm thus revolves around margin maximization. The SVM algorithm demonstrates significant advantages in addressing non-linear, high-dimensional, and small-sample problems [76]. Originally applied to linear classification, it was later extended to handle non-linear examples and further adapted to high-dimensional spaces [77]. It is also relatively resistant to overfitting.
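The margin-based classification described above can be sketched with scikit-learn's SVC. The descriptors, labels, and decision rule below are synthetic assumptions chosen only to illustrate an RBF-kernel phase classifier, not results from the cited work.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n = 120
delta = rng.uniform(0.0, 0.08, n)  # toy atomic size differences
vec = rng.uniform(4.0, 9.0, n)     # toy valence electron concentrations
# Assumed toy rule (illustration only): high VEC -> FCC (1), else BCC (0).
y = (vec > 6.87).astype(int)
X = np.column_stack([delta, vec])

# The RBF kernel lets the maximum-margin hyperplane be found in a lifted
# feature space, handling non-linear class boundaries.
clf = SVC(kernel="rbf", C=10.0, gamma="scale")
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```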
In the research of W. Zhang et al. [78], three approaches were introduced to elucidate the varying prediction accuracies of a single characteristic parameter across amorphous alloys (AM), solid-solution alloys (SS), and high-entropy alloys containing intermetallic compounds (HEA-IM). The first qualitatively explains the high or low prediction accuracy of a characteristic parameter in the three alloy types by a simple division across the parameter's entire range. They chose the SVM algorithm as the predictive model and used the atomic size difference (δ), mixing enthalpy (ΔHmix), electronegativity difference (Δχ), and mixing entropy (ΔSmix) as descriptors.
Nguyen et al. [79] also conducted research on SVM prediction. They employed the SVM method with hyperparameter tuning and weighted values to predict alloy phases, writing a Python program to create a multi-principal element alloy (MPEA) and HEA dataset. Search operations were conducted to optimize the SVM hyperparameters, and cross-validation was used to evaluate the accuracy of the prediction models using the optimized hyperparameters. They also compared their SVM solution with artificial neural networks (ANNs), demonstrating that the SVM approach outperforms or is comparable to ANN-based methods. Through experimental validation, they showed that incorporating the average and standard deviation of the melting point into the original dataset can enhance prediction accuracy for MPEAs and HEAs. They concluded that accurately predicting alloy structure contributes to an efficient search for new materials, providing feasible candidates for applications that require specific phases. Consequently, combining the SVM method with other ML algorithms is worthwhile for predicting the phases of MPEAs.
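The tuning-plus-cross-validation workflow described above can be sketched as follows; the grid values and synthetic data are illustrative assumptions, not those of Ref. [79].

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(2)
X = rng.normal(size=(150, 4))                  # four toy descriptors
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic phase labels

# Grid values are illustrative; a real study would tune them to its data.
grid = {"C": [0.1, 1, 10, 100], "gamma": ["scale", 0.01, 0.1, 1.0]}
search = GridSearchCV(SVC(kernel="rbf"), grid, cv=5)  # 5-fold cross-validation
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```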
The above studies collectively demonstrate the superiority of using the SVM model for predicting and designing HEAs. In comparison to other ML models and algorithms, the SVM model proves to be more effective in handling multi-parameter issues, providing significant assistance in the design of HEAs.

3.3. Gaussian Process (GP) Model

The Gaussian process (GP) model is a statistical model that defines a distribution over functions, embodying a collection of random variables where any finite subset exhibits a joint Gaussian distribution [80]. Within the realm of ML, GP models find extensive application in regression, classification, and optimization tasks [81]. The fundamental concept underlying GP models is the representation of functions as random variables, characterized by a mean function and a covariance function. The mean function delineates the expected value of the function at each point, while the covariance function captures interdependencies between distinct points in the input space.
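A minimal sketch of GP regression with a mean prediction and pointwise uncertainty, as described above; the kernel choice and one-dimensional toy data are assumptions for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)
X = rng.uniform(0.0, 1.0, size=(30, 1))  # one toy descriptor
y = np.sin(4.0 * X[:, 0]) + rng.normal(0.0, 0.05, 30)

# The mean function is zero by default; the covariance (RBF) kernel encodes
# how strongly nearby inputs co-vary, and WhiteKernel models observation noise.
kernel = 1.0 * RBF(length_scale=0.2) + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel, random_state=0).fit(X, y)

X_new = np.array([[0.25], [0.75]])
mean, std = gp.predict(X_new, return_std=True)  # prediction plus uncertainty
print(mean, std)
```

The predictive standard deviation is what makes GP models attractive for small HEA datasets: it flags compositions where the model is extrapolating.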
Tancret et al. [36] designed HEAs by employing GP statistical analysis, with a dataset of 322 alloys reported in the literature. In HEA design, no single criterion suffices for the reliable prediction of single solid-solution formation. Instead, they introduced a robust strategy grounded in a critical evaluation of existing criteria and a statistical analysis leveraging GP models. This approach concurrently considers a multitude of previously proposed criteria, providing a comprehensive method for predicting the emergence of a single solid solution, and thus stands as a valuable guide for the design of novel HEAs.

3.4. K-Nearest Neighbors (KNN) Model

The k-nearest neighbors (KNN) model is a machine learning model used for classification and regression tasks [82]. In HEA research, KNN models have been employed to predict mechanical properties such as tensile strength and hardness [83]. The KNN model operates on the principle of similarity [84]. Given a new data point, the algorithm identifies the 'k' nearest data points from the training dataset in the feature space. The prediction for the new data point is then determined by the average or weighted average of the outcomes of its k nearest neighbors [85].
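The neighbor-averaging rule above can be sketched in a few lines; the descriptors and "hardness" target below are synthetic placeholders.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(4)
X = rng.uniform(size=(100, 3))                           # three toy descriptors
y = 200.0 + 100.0 * X[:, 0] + rng.normal(0.0, 2.0, 100)  # synthetic hardness

# weights="distance" implements the weighted average of neighbor outcomes:
# alloys closer in descriptor space count for more.
knn = KNeighborsRegressor(n_neighbors=5, weights="distance").fit(X, y)
pred = knn.predict(X[:2])
print(pred)
```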
Raheleh et al. [65] employed a graph-based KNN approach to predict the phase of HEAs. Each HEA compound has its distinct phase, falling into five categories: FCC, BCC, HCP, multiphase, and amorphous. A composition phase signifies a material state with a specific energy level. The effectiveness of the phase prediction lies in determining the practical applications of the material. Within the network, each compound has neighboring counterparts, and the phase of a new compound can be predicted based on the phase of its most similar neighbors. The proposed approach was implemented on the HEA network. The experimental results demonstrate that the accuracy of this method in predicting the phase of new alloys is 88.88%, surpassing that of other machine learning methods.

3.5. Random Forests (RFs) Algorithm

The random forests (RFs) algorithm is a ML algorithm that belongs to the ensemble learning category, utilized for both classification and regression tasks [86]. It constructs multiple decision trees during the training phase and aggregates their predictions for robust and accurate results. The algorithm initiates the process by creating numerous bootstrap samples from the original dataset. Each sample is then employed to train an individual decision tree [87]. To introduce diversity and prevent overfitting, a random subset of features is considered at each node of the decision tree. After training the decision trees, the algorithm combines their predictions. For regression tasks, the final prediction is the average of the individual tree predictions [88]. In classification tasks, the mode of the predictions is considered. The RFs algorithm is renowned for its resilience, adaptability, and effectiveness in handling high-dimensional datasets [89]. It is particularly valuable due to its ability to mitigate overfitting compared to individual decision trees, contributing to enhanced predictive accuracy.
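A minimal sketch of the ensemble just described (bootstrap samples, a random feature subset at each split, aggregated votes), trained on synthetic phase labels rather than real alloy data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 6))            # six toy descriptors
y = (X[:, 0] - X[:, 2] > 0).astype(int)  # synthetic SS vs. SS + IM label

rf = RandomForestClassifier(
    n_estimators=100,     # number of trees, each grown on a bootstrap sample
    max_features="sqrt",  # random subset of features tried at each split
    bootstrap=True,
    random_state=0,
).fit(X, y)
# Classification output is the mode (majority vote) of the tree predictions.
print("training accuracy:", rf.score(X, y))
```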
Krishna et al. [90] utilized an ML approach to predict a multiphase alloy system characterized by a combination of solid-solution and intermetallic phases (SS + IM), using a dataset of 636 alloys. In their investigation of the RF classifier, they varied n_estimators, the number of trees in the forest, and max_depth, the depth from the root node to the leaf nodes. The range for n_estimators was set from 10 to 190 with a 20-unit interval, while max_depth varied from 3 to 14. For this investigation, the optimal values were determined to be 50 for n_estimators with a max_depth of 13, yielding an average cross-validation score of 0.788 over five cross-validation folds.
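The parameter sweep described above can be reproduced schematically with a cross-validated grid search; the dataset here is a synthetic stand-in for the 636-alloy dataset, and the grid samples the reported ranges rather than sweeping them exhaustively.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(6)
X = rng.normal(size=(200, 5))            # stand-in descriptors
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # stand-in phase labels

# The reported sweep was n_estimators 10-190 (step 20) and max_depth 3-14;
# only a few values from each range are sampled here to keep the sketch fast.
grid = {"n_estimators": [10, 50, 100, 190], "max_depth": [3, 8, 13]}
search = GridSearchCV(RandomForestClassifier(random_state=0), grid, cv=5)
search.fit(X, y)  # five cross-validation folds, as in the study
print(search.best_params_, round(search.best_score_, 3))
```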
However, every model has both advantages and limitations. Four problems in the use of NNs in data modelling are overfitting, chance effects, overtraining, and interpretation [91]. Jack V. Tu draws a similar conclusion in his work [92] and depicts the typical relationship between the network error and training duration.
As training progresses, the network error gradually decreases until reaching a minimum in the training set. However, the error in the test set may initially decrease and then begin to rise as the network starts overfitting the training data. It is common practice among neural network developers to periodically cross-validate the network on the test set during training and to save the network weight configuration based on either of two criteria: (1) the network with the minimum error in the training set or (2) the network with the minimum error in the test set. The latter technique is often used to prevent the network from overtraining and overfitting.
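The save-the-weights-at-minimum-validation-error practice described above corresponds to early stopping. A sketch using scikit-learn's built-in option follows; the data and network size are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
X = rng.uniform(size=(300, 4))
y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + rng.normal(0.0, 0.1, 300)

model = MLPRegressor(
    hidden_layer_sizes=(32,),
    early_stopping=True,      # hold out part of the training data ...
    validation_fraction=0.2,  # ... 20% here, as a validation set
    n_iter_no_change=20,      # stop once validation score stops improving
    max_iter=2000,
    random_state=0,
).fit(X, y)
print("stopped after", model.n_iter_, "iterations")
```

Training halts when the held-out score plateaus, which is precisely criterion (2) above: keeping the network with the minimum error on the test (validation) set.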
The weaknesses of the SVM algorithm include algorithmic complexity, inefficiency in multi-classification, and imbalanced datasets [93]. J. Cervantes et al. also show some of the approaches used to improve the training time of the SVM algorithm. Eliminating data that are less likely to be support vectors is a crucial step. In addition, it is more efficient to decompose the dataset into multiple chunks and optimize each chunk separately.
The limitation of the GP model is mainly the inefficiency in dealing with high-dimensional data and non-stationary data. The KNN model shows slow prediction on large-scale datasets. The RFs algorithm performs well on large-scale datasets but may perform poorly when dealing with highly correlated features [94]. All the advantages and limitations of these five models are shown in Table 1. Researchers need to select the appropriate model or combination of models based on the specific characteristics of the research subject.
Table 1. Advantages and limitations of NNs, SVM, GP, KNN, and RFs. Data from Refs. [93][94].
In addition to the five ML models and algorithms above, other common ML methods, such as principal component analysis (PCA) [95][96] and logistic regression (LR) [97], are used in the design of HEAs as well. However, it should be pointed out that a large amount of high-quality data is still needed for establishing and generalizing these common ML models and algorithms, and advanced ML models and algorithms should be explored further.

References

  1. Cantor, B.; Chang, I.T.H.; Knight, P.; Vincent, A.J.B. Microstructural development in equiatomic multicomponent alloys. Mater. Sci. Eng. A 2004, 375–377, 213–218.
  2. Yeh, J.W.; Chen, S.K.; Lin, S.J.; Gan, J.Y.; Chin, T.S.; Shun, T.T.; Tsau, C.H.; Chang, S.Y. Nanostructured high-entropy alloys with multiple principal elements: Novel alloy design concepts and outcomes. Adv. Eng. Mater. 2004, 6, 299–303.
  3. Miracle, D.B.; Senkov, O.N. A critical review of high entropy alloys and related concepts. Acta Mater. 2017, 122, 448–511.
  4. Dippo, O.F.; Vecchio, K.S. A universal configurational entropy metric for high-entropy materials. Scr. Mater. 2021, 201, 113974.
  5. Marik, S.; Motla, K.; Varghese, M.; Sajilesh, K.; Singh, D.; Breard, Y.; Boullay, P.; Singh, R. Superconductivity in a new hexagonal high-entropy alloy. Phys. Rev. Mater. 2019, 3, 060602.
  6. Yeh, J.W. Recent progress in high-entropy alloys. Ann. Chim. Sci. Mat. 2006, 31, 633–648.
  7. Zhang, Y.; Zuo, T.T.; Tang, Z.; Gao, M.C.; Dahmen, K.A.; Liaw, P.K.; Lu, Z.P. Microstructures and properties of high-entropy alloys. Prog. Mater. Sci. 2014, 61, 1–93.
  8. Chang, X.; Zeng, M.; Liu, K.; Fu, L. Phase engineering of high-entropy alloys. Adv. Mater. 2020, 32, 1907226.
  9. Han, C.; Fang, Q.; Shi, Y.; Tor, S.B.; Chua, C.K.; Zhou, K. Recent advances on high-entropy alloys for 3D printing. Adv. Mater. 2020, 32, 1903855.
  10. Wang, B.; Yang, C.; Shu, D.; Sun, B. A Review of Irradiation-Tolerant Refractory High-Entropy Alloys. Metals 2023, 14, 45.
  11. Zhang, C.; Jiang, X.; Zhang, R.; Wang, X.; Yin, H.; Qu, X.; Liu, Z.K. High-throughput thermodynamic calculations of phase equilibria in solidified 6016 Al-alloys. Comp. Mater. Sci. 2019, 167, 19–24.
  12. Li, R.; Xie, L.; Wang, W.Y.; Liaw, P.K.; Zhang, Y. High-throughput calculations for high-entropy alloys: A brief review. Front. Mater. 2020, 7, 290.
  13. Yang, X.; Wang, Z.; Zhao, X.; Song, J.; Zhang, M.; Liu, H. MatCloud: A high-throughput computational infrastructure for integrated management of materials simulation, data and resources. Comp. Mater. Sci. 2018, 146, 319–333.
  14. Feng, R.; Liaw, P.K.; Gao, M.C.; Widom, M. First-principles prediction of high-entropy-alloy stability. NPJ Comput. Mater. 2017, 3, 50.
  15. Gao, J.; Zhong, J.; Liu, G.; Yang, S.; Song, B.; Zhang, L.; Liu, Z. A machine learning accelerated distributed task management system (Malac-Distmas) and its application in high-throughput CALPHAD computation aiming at efficient alloy design. Adv. Powder Technol. 2022, 1, 100005.
  16. Feng, R.; Zhang, C.; Gao, M.C.; Pei, Z.; Zhang, F.; Chen, Y.; Ma, D.; An, K.; Poplawsky, J.D.; Ouyang, L. High-throughput design of high-performance lightweight high-entropy alloys. Nat. Commun. 2021, 12, 4329.
  17. Gruetzemacher, R.; Whittlestone, J. The transformative potential of artificial intelligence. Futures 2022, 135, 102884.
  18. Minh, D.; Wang, H.X.; Li, Y.F.; Nguyen, T.N. Explainable artificial intelligence: A comprehensive review. Artif. Intell. Rev. 2022, 55, 3503–3568.
  19. Batra, R.; Song, L.; Ramprasad, R. Emerging materials intelligence ecosystems propelled by machine learning. Nat. Rev. Mater. 2021, 6, 655–678.
  20. Heidenreich, J.N.; Gorji, M.B.; Mohr, D. Modeling structure-property relationships with convolutional neural networks: Yield surface prediction based on microstructure images. Int. J. Plast. 2023, 163, 103506.
  21. Huang, E.W.; Lee, W.J.; Singh, S.S.; Kumar, P.; Lee, C.Y.; Lam, T.N.; Chin, H.H.; Lin, B.H.; Liaw, P.K. Machine-learning and high-throughput studies for high-entropy materials. Mater. Sci. Eng. R. 2022, 147, 100645.
  22. Ren, W.; Zhang, Y.F.; Wang, W.L.; Ding, S.J.; Li, N. Prediction and design of high hardness high entropy alloy through machine learning. Mater. Des. 2023, 235, 112454.
  23. Liu, Y.; Zhao, T.; Ju, W.; Shi, S. Materials discovery and design using machine learning. J. Materiomics 2017, 3, 159–177.
  24. Oneto, L.; Navarin, N.; Biggio, B.; Errica, F.; Micheli, A.; Scarselli, F.; Bianchini, M.; Demetrio, L.; Bongini, P.; Tacchella, A. Towards learning trustworthily, automatically, and with guarantees on graphs: An overview. Neurocomputing 2022, 493, 217–243.
  25. Zhang, Y.; Ling, C. A strategy to apply machine learning to small datasets in materials science. NPJ Comput. Mater. 2018, 4, 25.
  26. Schleder, G.R.; Padilha, A.C.; Acosta, C.M.; Costa, M.; Fazzio, A. From DFT to machine learning: Recent approaches to materials science—A review. J. Phys. Mater. 2019, 2, 032001.
  27. Raccuglia, P.; Elbert, K.C.; Adler, P.D.; Falk, C.; Wenny, M.B.; Mollo, A.; Zeller, M.; Friedler, S.A.; Schrier, J.; Norquist, A.J. Machine-learning-assisted materials discovery using failed experiments. Nature 2016, 533, 73–76.
  28. He, H.; Wang, Y.; Qi, Y.; Xu, Z.; Li, Y.; Wang, Y. From Prediction to Design: Recent Advances in Machine Learning for the Study of 2D Materials. Nano Energy 2023, 118, 108965.
  29. Lee, S.Y.; Byeon, S.; Kim, H.S.; Jin, H.; Lee, S. Deep learning-based phase prediction of high-entropy alloys: Optimization, generation, and explanation. Mater. Des. 2021, 197, 109260.
  30. Zhang, Y.; Wen, C.; Wang, C.; Antonov, S.; Xue, D.; Bai, Y.; Su, Y. Phase prediction in high entropy alloys with a rational selection of materials descriptors and machine learning models. Acta Mater. 2020, 185, 528–539.
  31. Shapeev, A. Accurate representation of formation energies of crystalline alloys with many components. Comput. Mater. Sci. 2016, 139, 26–30.
  32. Song, K.; Xing, J.; Dong, Q. Optimization of the processing parameters during internal oxidation of Cu-Al alloy powders using an artificial neural network. Mater. Des. 2005, 26, 337–341.
  33. Sun, Y.; Zeng, W.D.; Zhao, Y.Q.; Qi, Y.L.; Ma, X.; Han, Y.F. Development of constitutive relationship model of Ti600 alloy using artificial neural network. Comput. Mater. Sci. 2010, 48, 686–691.
  34. Su, J.; Dong, Q.; Liu, P.; Li, H.; Kang, B. Prediction of Properties in Thermomechanically Treated Cu-Cr-Zr Alloy by an Artificial Neural Network. J. Mater. Sci. Technol. 2003, 19, 529–532.
  35. Lederer, Y.; Toher, C.; Vecchio, K.S.; Curtarolo, S. The search for high entropy alloys: A high-throughput ab-initio approach. Acta Mater. 2018, 159, 364–383.
  36. Tancret, F.; Toda-Caraballo, I.; Menou, E.; Díaz-Del, P.E.J.R. Designing high entropy alloys employing thermodynamics and Gaussian process statistical analysis. Mater. Des. 2017, 115, 486–497.
  37. Grabowski, B.; Ikeda, Y.; Srinivasan, P.; Körmann, F.; Freysoldt, C.; Duff, A.I.; Shapeev, A.; Neugebauer, J. Ab initio vibrational free energies including anharmonicity for multicomponent alloys. NPJ Comput. Mater. 2019, 5, 80.
  38. Malinov, S.; Sha, W.; McKeown, J. Modelling the correlation between processing parameters and properties in titanium alloys using artificial neural network. Comput. Mater. Sci. 2001, 21, 375–394.
  39. Warde, J.; Knowles, D.M. Use of neural networks for alloy design. ISIJ Int. 1999, 39, 1015–1019.
  40. Sun, Y.; Zeng, W.; Zhao, Y.; Zhang, X.; Shu, Y.; Zhou, Y. Modeling constitutive relationship of Ti40 alloy using artificial neural network. Mater. Des. 2011, 32, 1537–1541.
  41. Lin, Y.; Zhang, J.; Zhong, J. Application of neural networks to predict the elevated temperature flow behavior of a low alloy steel. Comput. Mater. Sci. 2008, 43, 752–758.
  42. Haghdadi, N.; Zarei-Hanzaki, A.; Khalesian, A.; Abedi, H. Artificial neural network modeling to predict the hot deformation behavior of an A356 aluminum alloy. Mater. Des. 2013, 49, 386–391.
  43. Dewangan, S.K.; Samal, S.; Kumar, V. Microstructure exploration and an artificial neural network approach for hardness prediction in AlCrFeMnNiWx High-Entropy Alloys. J. Alloys Compd. 2020, 823, 153766.
  44. Zhou, Z.; Zhou, Y.; He, Q.; Ding, Z.; Li, F.; Yang, Y. Machine learning guided appraisal and exploration of phase design for high entropy alloys. npj Comput. Mater. 2019, 5, 128.
  45. Li, Y.; Guo, W. Machine-learning model for predicting phase formations of high-entropy alloys. Phys. Rev. Mater. 2019, 3, 095005.
  46. Wen, C.; Zhang, Y.; Wang, C.; Xue, D.; Bai, Y.; Antonov, S.; Dai, L.; Lookman, T.; Su, Y. Machine learning assisted design of high entropy alloys with desired property. Acta Mater. 2019, 170, 109–117.
  47. Singh, J.; Singh, S. Support vector machine learning on slurry erosion characteristics analysis of Ni- and Co-alloy coatings. Surf. Rev. Lett. 2023, 2340006.
  48. Alajmi, M.S.; Almeshal, A.M. Estimation and optimization of tool wear in conventional turning of 709M40 alloy steel using support vector machine (SVM) with Bayesian optimization. Materials 2021, 14, 3773.
  49. Lu, W.C.; Ji, X.B.; Li, M.J.; Liu, L.; Yue, B.H.; Zhang, L.M. Using support vector machine for materials design. Adv. Manuf. 2013, 1, 151–159.
  50. Nain, S.S.; Garg, D.; Kumar, S. Evaluation and analysis of cutting speed, wire wear ratio, and dimensional deviation of wire electric discharge machining of super alloy Udimet-L605 using support vector machine and grey relational analysis. Adv. Manuf. 2018, 6, 225–246.
  51. Lei, C.; Mao, J.; Zhang, X.; Wang, L.; Chen, D. Crack prediction in sheet forming of zirconium alloys used in nuclear fuel assembly by support vector machine method. Energy Rep. 2021, 7, 5922–5932.
  52. Xiang, G.; Zhang, Q. Multi-object optimization of titanium alloy milling process using support vector machine and NSGA-II algorithm. Int. J. Simul. Syst. Sci. Technol. 2016, 17, 35.
  53. Kong, D.; Chen, Y.; Li, N.; Duan, C.; Lu, L.; Chen, D. Tool wear estimation in end milling of titanium alloy using NPE and a novel WOA-SVM model. IEEE Trans. Instrum. Meas. 2019, 69, 5219–5232.
  54. Caixu, Y.; Zhenlong, X.; Xianli, L.; Mingwei, Z. Chatter prediction of milling process for titanium alloy thin-walled workpiece based on EMD-SVM. J. Adv. Manuf. Sci. Technol. 2022, 2, 2022010.
  55. Meshkov, E.A.; Novoselov, I.I.; Shapeev, A.; Yanilkin, V.A. Sublattice formation in CoCrFeNi high-entropy alloy. Intermetallics 2019, 112, 106542.
  56. Park, S.M.; Lee, T.; Lee, J.H.; Kang, J.S.; Kwon, M.S. Gaussian process regression-based Bayesian optimization of the insulation-coating process for Fe–Si alloy sheets. J. Mater. Res. Technol. 2023, 22, 3294–3301.
  57. Khatamsaz, D.; Vela, B.; Arróyave, R. Multi-objective Bayesian alloy design using multi-task Gaussian processes. Mater. Lett. 2023, 351, 135067.
  58. Tancret, F. Computational thermodynamics, Gaussian processes and genetic algorithms: Combined tools to design new alloys. Model. Simul. Mat. Sci. Eng. 2013, 21, 045013.
  59. Mahmood, M.A.; Rehman, A.U.; Karakaş, B.; Sever, A.; Rehman, R.U.; Salamci, M.U.; Khraisheh, M. Printability for additive manufacturing with machine learning: Hybrid intelligent Gaussian process surrogate-based neural network model for Co-Cr alloy. J. Mech. Behav. Biomed. Mater. 2022, 135, 105428.
  60. Sabin, T.; Bailer-Jones, C.; Withers, P. Accelerated learning using Gaussian process models to predict static recrystallization in an Al-Mg alloy. Model. Simul. Mat. Sci. Eng. 2000, 8, 687.
  61. Gong, X.; Yabansu, Y.C.; Collins, P.C.; Kalidindi, S.R. Evaluation of Ti–Mn Alloys for Additive Manufacturing Using High-Throughput Experimental Assays and Gaussian Process Regression. Materials 2020, 13, 4641.
  62. Hasan, M.S.; Kordijazi, A.; Rohatgi, P.K.; Nosonovsky, M. Triboinformatic modeling of dry friction and wear of aluminum base alloys using machine learning algorithms. Tribol. Int. 2021, 161, 107065.
  63. Bobbili, R.; Ramakrishna, B. Prediction of phases in high entropy alloys using machine learning. Mater. Today Commun. 2023, 36, 106674.
  64. Huang, W.; Martin, P.; Zhuang, H.L. Machine-learning phase prediction of high-entropy alloys. Acta Mater. 2019, 169, 225–236.
  65. Ghouchan Nezhad Noor Nia, R.; Jalali, M.; Houshmand, M. A Graph-Based k-Nearest Neighbor (KNN) Approach for Predicting Phases in High-Entropy Alloys. Appl. Sci. 2022, 12, 8021.
  66. Gupta, A.K.; Chakroborty, S.; Ghosh, S.K.; Ganguly, S. A machine learning model for multi-class classification of quenched and partitioned steel microstructure type by the k-nearest neighbor algorithm. Comput. Mater. Sci. 2023, 228, 112321.
  67. Zhang, J.; Wu, J.F.; Yin, A.; Xu, Z.; Zhang, Z.; Yu, H.; Lu, Y.; Liao, W.; Zheng, L. Grain size characterization of Ti-6Al-4V titanium alloy based on laser ultrasonic random forest regression. Appl. Opt. 2022, 62, 735–744.
  68. Zhang, Z.; Yang, Z.; Ren, W.; Wen, G. Random forest-based real-time defect detection of Al alloy in robotic arc welding using optical spectrum. J. Manuf. Process. 2019, 42, 51–59.
  69. Prieto, A.; Prieto, B.; Ortigosa, E.M.; Ros, E.; Pelayo, F.; Ortega, J.; Rojas, I. Neural networks: An overview of early research, current frameworks and new challenges. Neurocomputing 2016, 214, 242–268.
  70. Yuste, R. From the neuron doctrine to neural networks. Nat. Rev. Neurosci. 2015, 16, 487–497.
  71. Islam, M.M.; Murase, K. A new algorithm to design compact two-hidden-layer artificial neural networks. Neural Netw. 2001, 14, 1265–1278.
  72. Apicella, A.; Donnarumma, F.; Isgrò, F.; Prevete, R. A survey on modern trainable activation functions. Neural Netw. 2021, 138, 14–32.
  73. Wang, J.; Kwon, H.; Kim, H.S.; Lee, B.J. A neural network model for high entropy alloy design. npj Comput. Mater. 2023, 9, 60.
  74. Aslani, M.; Seipel, S. Efficient and decision boundary aware instance selection for support vector machines. Inf. Sci. 2021, 577, 579–598.
  75. Chapelle, O.; Haffner, P.; Vapnik, V.N. Support vector machines for histogram-based image classification. IEEE Trans. Neural Netw. 1999, 10, 1055–1064.
  76. Xu, X.; Liang, T.; Zhu, J.; Zheng, D.; Sun, T. Review of classical dimensionality reduction and sample selection methods for large-scale data processing. Neurocomputing 2019, 328, 5–15.
  77. Hussain, S.F. A novel robust kernel for classifying high-dimensional data using Support Vector Machines. Expert Syst. Appl. 2019, 131, 116–131.
  78. Zhang, W.; Li, P.; Wang, L.; Wan, F.; Wu, J.; Yong, L. Explaining of prediction accuracy on phase selection of amorphous alloys and high entropy alloys using support vector machines in machine learning. Mater. Today Commun. 2023, 35, 105694.
  79. Chau, N.H.; Kubo, M.; Hai, L.V.; Yamamoto, T. Support Vector Machine-Based Phase Prediction of Multi-Principal Element Alloys. Vietnam J. Comput. Sci. 2022, 10, 101–116.
  80. Li, P.; Chen, S. Gaussian process approach for metric learning. Pattern Recognit. 2019, 87, 17–28.
  81. Liu, H.; Ong, Y.S.; Shen, X.; Cai, J. When Gaussian process meets big data: A review of scalable GPs. IEEE Trans. Neural Netw. Learn. Syst. 2020, 31, 4405–4423.
  82. Ertuğrul, Ö.F.; Tağluk, M.E. A novel version of k nearest neighbor: Dependent nearest neighbor. Appl. Soft Comput. 2017, 55, 480–490.
  83. Adithiyaa, T.; Chandramohan, D.; Sathish, T. Optimal prediction of process parameters by GWO-KNN in stirring-squeeze casting of AA2219 reinforced metal matrix composites. Mater. Today Proc. 2020, 21, 1000–1007.
  84. Jahromi, M.Z.; Parvinnia, E.; John, R. A method of learning weighted similarity function to improve the performance of nearest neighbor. Inf. Sci. 2009, 179, 2964–2973.
  85. Chen, Y.; Hao, Y. A feature weighted support vector machine and K-nearest neighbor algorithm for stock market indices prediction. Expert Syst. Appl. 2017, 80, 340–355.
  86. Utkin, L.V.; Kovalev, M.S.; Coolen, F.P. Imprecise weighted extensions of random forests for classification and regression. Appl. Soft Comput. 2020, 92, 106324.
  87. Özçift, A. Random forests ensemble classifier trained with data resampling strategy to improve cardiac arrhythmia diagnosis. Comput. Biol. Med. 2011, 41, 265–271.
  88. Rokach, L. Decision forest: Twenty years of research. Inf. Fusion 2016, 27, 111–125.
  89. Yang, L.; Shami, A. IoT data analytics in dynamic environments: From an automated machine learning perspective. Eng. Appl. Artif. Intell. 2022, 116, 105366.
  90. Krishna, Y.V.; Jaiswal, U.K.; Rahul, M. Machine learning approach to predict new multiphase high entropy alloys. Scr. Mater. 2021, 197, 113804.
  91. Livingstone, D.J.; Manallack, D.T.; Tetko, I.V. Data modelling with neural networks: Advantages and limitations. J. Comput.-Aided Mol. Des. 1997, 11, 135–142.
  92. Tu, J.V. Advantages and disadvantages of using artificial neural networks versus logistic regression for predicting medical outcomes. J. Clin. Epidemiol. 1996, 49, 1225–1231.
  93. Cervantes, J.; Garcia-Lamont, F.; Rodríguez-Mazahua, L.; Lopez, A. A comprehensive survey on support vector machine classification: Applications, challenges and trends. Neurocomputing 2020, 408, 189–215.
  94. Brieuc, M.S.; Waters, C.D.; Drinan, D.P.; Naish, K.A. A practical introduction to Random Forest for genetic association studies in ecology and evolution. Mol. Ecol. Resour. 2018, 18, 755–766.
  95. Kasdekar, D.K.; Parashar, V. Principal component analysis to optimize the ECM parameters of Aluminium alloy. Mater. Today Proc. 2018, 5, 5398–5406.
  96. Sonawane, S.A.; Kulkarni, M.L. Optimization of machining parameters of WEDM for Nimonic-75 alloy using principal component analysis integrated with Taguchi method. J. King Saud Univ. Eng. Sci. 2018, 30, 250–258.
  97. Bouchard, M.; Mergler, D.; Baldwin, M.; Panisset, M.; Roels, H.A. Neuropsychiatric symptoms and past manganese exposure in a ferro-alloy plant. Neurotoxicology 2007, 28, 290–297.