Applications of Machine Learning in Fluid Mechanics: History

The significant growth of artificial intelligence (AI) methods in machine learning (ML) and deep learning (DL) has opened opportunities for fluid dynamics and its applications in science, engineering, and medicine. Developing AI methods for fluid dynamics poses different challenges from applications with massive data, such as the Internet of Things.

  • artificial intelligence
  • machine learning
  • deep learning
  • neural networks
  • fluids

1. Introduction

In the past few years, machine learning (ML) and deep learning (DL) have been pursued in most social, scientific, engineering, and industry branches. Initially established at the point where computer and statistical science meet, ML evolution has been driven by advancements in Artificial Intelligence (AI) and its related algorithms, fortified by the “big data” and access to cost-effective computational architectures. ML-based approaches span fields such as construction engineering, aerospace, biomedicine, materials science, education, financial modelling, and marketing [1][2].
Machine learning techniques fall into three groups: unsupervised, semi-supervised, and supervised [2][3]. This text considers DL a subset of the broader ML field. Supervised approaches rely on labelled data to train the ML algorithm effectively; this is the most common category, with wide applicability in science and technology [4][5][6][7]. Unsupervised ML extracts features from high-dimensional data sets without the need for pre-labelled training data. Well-established clustering methods, such as K-means, hierarchical agglomerative clustering, and DBSCAN [8][9], apply here. Dimensionality reduction techniques such as proper orthogonal decomposition (POD) [10], principal component analysis (PCA) [11][12], and dynamic mode decomposition (DMD) [13], which emerged from fundamental statistical analysis, as well as reduced-order modelling techniques, have also been tied to unsupervised ML. Meanwhile, semi-supervised learning combines elements of supervised and unsupervised ML and has been successfully applied to time-dependent and image data [14][15].
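The dimensionality-reduction idea behind POD and PCA can be sketched in a few lines of NumPy: the singular value decomposition of a mean-centred snapshot matrix yields the dominant modes and the fraction of the variance (the "energy") each one captures. The snapshot matrix below is synthetic and purely illustrative, built from two planted spatial modes plus noise.

```python
import numpy as np

# Toy snapshot matrix: 100 "snapshots" of a 50-point field, built from
# two dominant spatial modes plus small noise (hypothetical data).
rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 50)
modes = np.stack([np.sin(x), np.cos(2 * x)])          # (2, 50) planted modes
coeffs = rng.normal(size=(100, 2))                    # random mode amplitudes
snapshots = coeffs @ modes + 0.01 * rng.normal(size=(100, 50))

# PCA/POD via SVD of the mean-centred snapshot matrix.
centred = snapshots - snapshots.mean(axis=0)
_, s, vt = np.linalg.svd(centred, full_matrices=False)
energy = s**2 / np.sum(s**2)          # fraction of variance per mode

# The two planted modes should capture almost all the variance.
print(f"variance captured by first 2 modes: {energy[:2].sum():.3f}")
```

The leading rows of `vt` are the dominant spatial modes; truncating to them is the starting point for a reduced-order model.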
At the heart of ML theories, the algorithms proposed range from linear feed-forward models to complex DL implementations. By complexity, models may be classified as shallow or deep ML, with possible combinations. Shallow learning (SL) algorithms, for both classification and regression tasks, include linear and multiple linear regression, regularized variants such as LASSO and ridge regression, support vector machines (SVMs), stochastic methods based on Gaussian processes and Bayes’ theorem, and decision trees [16]. Ensemble and super-learner methods, which combine multiple shallow algorithms into more complex models, such as random forests, gradient boosting, and extremely randomized trees, have also grown in popularity in recent years [17].
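As a concrete instance of a shallow regression method, ridge regression admits a closed-form solution, w = (XᵀX + αI)⁻¹Xᵀy, where α controls the regularization strength. The sketch below fits it to synthetic data; the data and the `ridge_fit` helper are illustrative, not taken from any cited work.

```python
import numpy as np

# Synthetic data with a known linear relation (hypothetical example).
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.05 * rng.normal(size=200)

def ridge_fit(X, y, alpha=1.0):
    """Closed-form ridge solution: solve (X^T X + alpha*I) w = X^T y."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ y)

w = ridge_fit(X, y, alpha=0.1)
print(np.round(w, 2))   # recovers coefficients close to true_w
```

Larger values of `alpha` shrink the coefficients toward zero, trading a little bias for robustness on small or noisy data sets.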
Notwithstanding its wide acceptance, the applicability of SL is limited to small-to-medium-sized data sets, which might be a problem in demanding fluid mechanics tasks. Thus, the need for more robust methods has emerged, and DL has come to the fore [18]. DL is based on the multi-layer perceptron, an artificial neural network (ANN) architecture that loosely mimics the biological neural networks of the human brain [6].
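The multi-layer perceptron composes affine maps with nonlinear activations, y = W₂ f(W₁x + b₁) + b₂. A minimal forward pass, with arbitrary untrained weights chosen purely for illustration, looks like:

```python
import numpy as np

# One hidden layer of 8 units mapping a 3-feature input to a scalar output.
rng = np.random.default_rng(2)
W1, b1 = rng.normal(size=(8, 3)), np.zeros(8)
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)

def mlp_forward(x):
    hidden = np.tanh(W1 @ x + b1)   # nonlinear activation of the hidden layer
    return W2 @ hidden + b2         # linear output layer

y = mlp_forward(np.array([0.1, -0.3, 0.7]))
print(y.shape)
```

Training adjusts the weights by gradient descent on a loss function; stacking more hidden layers turns this shallow network into the deep architectures discussed below.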
DL models have embedded complex mathematical concepts and advanced computer programming techniques in a great number of interconnected nodes and layers [19], making them an appropriate choice for dealing with applications that involve image and video processing [20][21][22][23], speech recognition [24], biological data manipulation [25], and flow reconstruction [26]. Nevertheless, a physical interpretation of the outcome is still lacking, which limits their application in physical sciences, where interpretability has a central role. In such cases, it would be beneficial to adopt models that produce meaningful results, i.e., mathematical expressions, which can spot correlations with existing empirical relations and propose an analytical approach bound to physical laws.
Conventional neural networks operate solely on data and are often considered “black box” models. Symbolic regression (SR), by contrast, is a class of ML algorithms that derive mathematical equations directly from data. Unlike standard regression procedures, SR searches, typically via evolutionary algorithms, over a wide set of mathematical operators and constants to combine input features into meaningful expressions [27]. This ability to extract results that are simultaneously interpretable and exploitable has led to numerous genetic programming-based tools that implement SR for disciplines from materials science to construction engineering to medical science [28].
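The search principle behind SR can be illustrated with a toy example: score a small candidate set of expressions against data and keep the best fit. Practical SR tools replace this exhaustive search with genetic programming over expression trees; the candidate set and the hidden target function below are hypothetical.

```python
import numpy as np

# Hidden "physical law" that the search should rediscover.
rng = np.random.default_rng(3)
x = rng.uniform(0.5, 3.0, size=100)
y = x**2 + np.sin(x)

# A tiny expression space built from a small operator set.
candidates = {
    "x**2 + sin(x)": lambda x: x**2 + np.sin(x),
    "x**2":          lambda x: x**2,
    "exp(x)":        lambda x: np.exp(x),
    "x + sin(x)":    lambda x: x + np.sin(x),
}

# Keep the expression with the lowest mean squared error.
best = min(candidates, key=lambda name: np.mean((candidates[name](x) - y) ** 2))
print(best)   # the planted expression wins with zero error
```

The payoff over a black-box model is the output itself: a closed-form expression that can be compared against empirical relations and physical laws.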
ML methods have been employed in fluid dynamics research but have not yet been widely adopted in engineering practice. Computational fluid dynamics (CFD) discretizes space using meshes: as the grid density increases, so does the accuracy, but at the expense of computational cost. Such techniques have benefited the aerospace, automotive, and biomedical industries. The challenges are both numerical and physical, and several reviews have covered specific industrial applications or areas of physics [29][30][31][32]. However, despite significant advances in the field, there is still a need to improve efficiency and accuracy.
The most critical example is turbulence, a flow regime characterised by velocity and pressure fluctuations, instabilities, and rotating fluid structures (eddies) spanning a vast range of scales, from the macroscopic down to near-molecular. Accurately resolving such transient flow physics with CFD requires an enormous number of grid points. The approach in which the Navier–Stokes equations are applied directly is called direct numerical simulation (DNS) and carries tremendous computational overhead; it is therefore limited to specific problems and requires large high-performance computing (HPC) facilities and long run times. Consequently, reduced turbulence models are often used at the expense of accuracy.
Furthermore, while many such techniques exist, no universal model works consistently across all turbulence flows, so while many computational methods exist for different problems, the computational barrier is often unavoidable. Recent efforts in numerical modelling frequently focus on how to speed up computations, a task that would benefit many industries. A growing school of thought is to couple these numerical methods with data-driven algorithms to improve accuracy and performance. This is where ML becomes relevant.

2. Applications of ML in Fluid Mechanics

As mentioned in the Introduction, a long-standing problem in fluid dynamics is turbulence, a flow regime characterised by unpredictable, time-dependent velocity and pressure fluctuations [33][34][35][36]. The use of data-driven methods has increased significantly over the last few years, providing new modelling tools for complex turbulent flows [37]. ML has been used to model turbulence; for example, ANNs have been used to predict the instantaneous velocity vector field in turbulent wake flows [38] and turbulent channel flows [39][40]. The popularity of many ML methods, especially deep learning, for solving complex fluid flow problems is also supported by high-quality open-source data sets published by researchers [41][42][43].
Much effort is currently put into using ML algorithms to improve the accuracy of Reynolds-averaged Navier–Stokes (RANS) models, for which Gaussian processes (GPs) have been used to quantify and reduce uncertainties [44]. Tracey et al. [45] used an ANN that takes flow properties as input and reconstructs the Spalart–Allmaras (SA) closure equations. The researchers studied how different components of ML algorithms, such as the choice of the cost function and feature scaling, affected performance. Other examples include using DL to model the Reynolds stress anisotropy tensor [46][47]. Bayesian inference has also been used to optimize RANS model coefficients [48][49], as well as to derive functions that quantify and reduce the gap between model predictions and high-fidelity data, such as DNS data [50][51][52]. Furthermore, a Bayesian neural network (BNN) has been used to improve the predictions of RANS models and to quantify the uncertainty associated with the forecast [53].
In the field of large eddy simulations (LES), ANNs have been used to replace conventional subgrid models [54][55]. Convolutional neural networks (CNNs) have also been used to calculate subgrid closure terms [56]. Moreover, DL modelling using input vorticity and stream function stencils was implemented to predict a dynamic, time- and space-dependent closure term for the subgrid models [57]. More specifically, starting from detailed DNS output, the main focus has been to train an NN model on low-resolution LES data and reconstruct the fluid dynamics, replacing DNS where possible. Many relevant applications have emerged to this end. Bao et al. [58] have suggested combining physics-guided, two-component, PDE-based techniques with recurrent temporal data models and super-resolution spatial data methods. An extension of this approach is a multi-scale temporal path UNet (MST-UNet) model that reconstructs both temporal and spatial flow information from low-resolution data [58].
Fukami et al. [59] applied super-resolution techniques, common in image reconstruction, to two-dimensional coarse flow fields and managed to reconstruct laminar and turbulent flows. A more recent version of this method employed autoencoders to achieve order reduction for fluid flows [60]. A complete super-resolution case study is presented in another work, covering all the details of a CNN-based super-resolution architecture for turbulent fluid flows with DNS training data at various Reynolds numbers [61]. Nevertheless, because DNS data are hard to obtain, generative models are often incorporated. In the review of Buzzicotti [62], three prevalent models are discussed: variational autoencoders (VAEs), generative adversarial networks (GANs), and diffusion models. These models are based on CNNs, and their implementation focuses on reproducing the multiscale and multifrequency nature of fluid dynamics, which is more complex than classical image recognition applications.
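The super-resolution setting can be framed in one dimension: a field known only on a coarse grid (as in low-resolution LES data) must be reconstructed on a fine grid. Linear interpolation, sketched below on a synthetic smooth field, is the classical baseline that learned CNN-based super-resolution models aim to beat on genuinely turbulent, multiscale data.

```python
import numpy as np

# High-resolution "ground truth" field on a fine grid (synthetic stand-in).
x_hi = np.linspace(0, 2 * np.pi, 65)
field_hi = np.sin(3 * x_hi)

# Downsample by a factor of 4 to mimic coarse simulation output.
x_lo = x_hi[::4]
field_lo = field_hi[::4]

# Baseline reconstruction back onto the fine grid via linear interpolation.
field_rec = np.interp(x_hi, x_lo, field_lo)
err = np.max(np.abs(field_rec - field_hi))
print(f"max reconstruction error: {err:.3f}")
```

For smooth fields the baseline error is small, but it grows sharply once the field contains scales finer than the coarse grid spacing, which is precisely the regime where trained super-resolution networks add value.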
Another exciting application of ML in fluid mechanics is related to food processing [63]. The ML models aim to optimize process parameters and kinetics for reduced energy consumption, to speed up processing time, and to continuously improve product quality. Food processing operations such as drying, frying, baking, and extrusion each require ML algorithms to be developed, trained, and validated against accurate data.
Furthermore, ML has found application in many chemical engineering flows, including transport in porous media, catalytic reactors, batteries, and carbon dioxide storage [64]. Many of these problems are linked to energy transition efforts; for batteries, long-term safety, energy density, and cycle life are of particular interest. Diverse ML applications in fluid mechanics also include nanofluids and ternary hybrid nanofluids [65] as well as metallurgy processes [66].

This entry is adapted from the peer-reviewed paper 10.3390/fluids8070212

References

  1. Pugliese, R.; Regondi, S.; Marini, R. Machine learning-based approach: Global trends, research directions, and regulatory standpoints. Data Sci. Manag. 2021, 4, 19–29.
  2. Frank, M.; Drikakis, D.; Charissis, V. Machine-Learning Methods for Computational Science and Engineering. Computation 2020, 8, 15.
  3. Brunton, S.L.; Noack, B.R.; Koumoutsakos, P. Machine Learning for Fluid Mechanics. Annu. Rev. Fluid Mech. 2020, 52, 477–508.
  4. Singh, M.P.; Alatyar, A.M.; Berrouk, A.S.; Saeed, M. Numerical modelling of rotating packed beds used for CO2 capture processes: A review. Can. J. Chem. Eng. 2023.
  5. Guo, S.; Agarwal, M.; Cooper, C.; Tian, Q.; Gao, R.X.; Guo, W.; Guo, Y. Machine learning for metal additive manufacturing: Towards a physics-informed data-driven paradigm. J. Manuf. Syst. 2022, 62, 145–163.
  6. Nazemi, E.; Dinca, M.; Movafeghi, A.; Rokrok, B.; Choopan Dastjerdi, M. Estimation of volumetric water content during imbibition in porous building material using real time neutron radiography and artificial neural network. Nucl. Instrum. Methods Phys. Res. Sect. Accel. Spectrometers Detect. Assoc. Equip. 2019, 940, 344–350.
  7. Leverant, C.J.; Greathouse, J.A.; Harvey, J.A.; Alam, T.M. Machine Learning Predictions of Simulated Self-Diffusion Coefficients for Bulk and Confined Pure Liquids. J. Chem. Theory Comput. 2023, 19, 3054–3062.
  8. Sofos, F.; Charakopoulos, A.; Papastamatiou, K.; Karakasidis, T.E. A combined clustering/symbolic regression framework for fluid property prediction. Phys. Fluids 2022, 34, 062004.
  9. Papastamatiou, K.; Sofos, F.; Karakasidis, T.E. Calculating material properties with purely data-driven methods: From clusters to symbolic expressions. In Proceedings of the 12th Hellenic Conference on Artificial Intelligence (SETN ’22), Corfu, Greece, 7–9 September 2022; Association for Computing Machinery: New York, NY, USA, 2022; pp. 1–9.
  10. Amsallem, D.; Farhat, C. Interpolation Method for Adapting Reduced-Order Models and Application to Aeroelasticity. AIAA J. 2008, 46, 1803–1813.
  11. Sofos, F.; Karakasidis, T. Nanoscale slip length prediction with machine learning tools. Sci. Rep. 2021, 11, 12520.
  12. Scherl, I.; Strom, B.; Shang, J.K.; Williams, O.; Polagye, B.L.; Brunton, S.L. Robust principal component analysis for modal decomposition of corrupt fluid flows. Phys. Rev. Fluids 2020, 5, 054401.
  13. Schmid, P.J. Dynamic Mode Decomposition and Its Variants. Annu. Rev. Fluid Mech. 2022, 54, 225–254.
  14. Deo, I.K.; Jaiman, R. Predicting waves in fluids with deep neural network. Phys. Fluids 2022, 34, 067108.
  15. M S, V.M.; Menon, V. Measuring Viscosity of Fluids: A Deep Learning Approach Using a CNN-RNN Architecture. In Proceedings of the First International Conference on AI-ML Systems, AIML Systems ’21, Bangalore, India, 21–23 October 2021; Association for Computing Machinery: New York, NY, USA, 2021; pp. 1–5.
  16. Stergiou, K.; Ntakolia, C.; Varytis, P.; Koumoulos, E.; Karlsson, P.; Moustakidis, S. Enhancing property prediction and process optimization in building materials through machine learning: A review. Comput. Mater. Sci. 2023, 220, 112031.
  17. Hadavimoghaddam, F.; Ostadhassan, M.; Sadri, M.A.; Bondarenko, T.; Chebyshev, I.; Semnani, A. Prediction of Water Saturation from Well Log Data by Machine Learning Algorithms: Boosting and Super Learner. J. Mar. Sci. Eng. 2021, 9, 666.
  18. Guo, K.; Yang, Z.; Yu, C.H.; Buehler, M.J. Artificial intelligence and machine learning in design of mechanical materials. Mater. Horiz. 2021, 8, 1153–1172.
  19. Xu, H.; Zhang, D.; Zeng, J. Deep-learning of parametric partial differential equations from sparse and noisy data. Phys. Fluids 2021, 33, 037132.
  20. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. In Advances in Neural Information Processing Systems; Pereira, F., Burges, C., Bottou, L., Weinberger, K., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2012; Volume 25.
  21. Farabet, C.; Couprie, C.; Najman, L.; LeCun, Y. Learning Hierarchical Features for Scene Labeling. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1915–1929.
  22. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going Deeper with Convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015.
  23. Jiang, Y.G.; Wu, Z.; Wang, J.; Xue, X.; Chang, S.F. Exploiting Feature and Class Relationships in Video Categorization with Regularized Deep Neural Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 352–364.
  24. Hinton, G.; Deng, L.; Yu, D.; Dahl, G.E.; Mohamed, A.r.; Jaitly, N.; Senior, A.; Vanhoucke, V.; Nguyen, P.; Sainath, T.N.; et al. Deep Neural Networks for Acoustic Modeling in Speech Recognition: The Shared Views of Four Research Groups. IEEE Signal Process. Mag. 2012, 29, 82–97.
  25. Leung, M.K.K.; Xiong, H.Y.; Lee, L.J.; Frey, B.J. Deep learning of the tissue-regulated splicing code. Bioinformatics 2014, 30, i121–i129.
  26. Callaham, J.L.; Maeda, K.; Brunton, S.L. Robust flow reconstruction from limited measurements via sparse representation. Phys. Rev. Fluids 2019, 4, 103907.
  27. Tohme, T.; Liu, D.; Youcef-Toumi, K. GSR: A Generalized Symbolic Regression Approach. arXiv 2022, arXiv:2205.15569.
  28. Angelis, D.; Sofos, F.; Karakasidis, T.E. Artificial Intelligence in Physical Sciences: Symbolic Regression Trends and Perspectives. Arch. Comput. Methods Eng. 2023, 30, 3845–3865.
  29. Rider, W.; Kamm, J.; Weirs, V. Verification, Validation, and Uncertainty Quantification for Coarse Grained Simulation. In Coarse Grained Simulation and Turbulent Mixing; Cambridge University Press: Cambridge, UK, 2016; pp. 168–189.
  30. Drikakis, D.; Kwak, D.; Kiris, C. Computational Aerodynamics: Advances and Challenges. Aeronaut. J. 2016, 120, 13–36.
  31. Norton, T.; Sun, D.W. Computational fluid dynamics (CFD) – an effective and efficient design and analysis tool for the food industry: A review. Trends Food Sci. Technol. 2006, 17, 600–620.
  32. Kobayashi, T.; Tsubokura, M. CFD Application in Automotive Industry. In Notes on Numerical Fluid Mechanics and Multidisciplinary Design; Hirschel, E.H., Krause, E., Eds.; Springer: Berlin/Heidelberg, Germany, 2009; Volume 100.
  33. Thornber, B.; Mosedale, A.; Drikakis, D. On the implicit large eddy simulations of homogeneous decaying turbulence. J. Comput. Phys. 2007, 226, 1902–1929.
  34. Jiménez, J. Near-wall turbulence. Phys. Fluids 2013, 25, 101302.
  35. Drikakis, D. Advances in turbulent flow computations using high-resolution methods. Prog. Aerosp. Sci. 2003, 39, 405–424.
  36. Kobayashi, H.; Matsumoto, E.; Fukushima, N.; Tanahashi, M.; Miyauchi, T. Statistical properties of the local structure of homogeneous isotropic turbulence and turbulent channel flows. J. Turbul. 2011, 12, N12.
  37. Duraisamy, K.; Iaccarino, G.; Xiao, H. Turbulence modeling in the age of data. Annu. Rev. Fluid Mech. 2019, 51, 357–377.
  38. Giralt, F.; Arenas, A.; Ferre-Gine, J.; Rallo, R.; Kopp, G. The simulation and interpretation of free turbulence with a cognitive neural system. Phys. Fluids 2000, 12, 1826–1835.
  39. Milano, M.; Koumoutsakos, P. Neural network modeling for near wall turbulent flow. J. Comput. Phys. 2002, 182, 1–26.
  40. Chang, F.J.; Yang, H.C.; Lu, J.Y.; Hong, J.H. Neural network modelling for mean velocity and turbulence intensities of steep channel flows. Hydrol. Process. Int. J. 2008, 22, 265–274.
  41. McConkey, R.; Yee, E.; Lien, F.S. A curated dataset for data-driven turbulence modelling. Sci. Data 2021, 8, 255.
  42. Bonnet, F.; Mazari, A.J.; Cinnella, P.; Gallinari, P. AirfRANS: High Fidelity Computational Fluid Dynamics Dataset for Approximating Reynolds-Averaged Navier-Stokes Solutions. arXiv 2023, arXiv:2212.07564.
  43. Ribeiro, M.D.; Rehman, A.; Ahmed, S.; Dengel, A. DeepCFD: Efficient Steady-State Laminar Flow Approximation with Deep Convolutional Neural Networks. arXiv 2021, arXiv:2004.08826.
  44. Xiao, H.; Wu, J.L.; Wang, J.X.; Sun, R.; Roy, C. Quantifying and reducing model-form uncertainties in Reynolds-averaged Navier–Stokes simulations: A data-driven, physics-informed Bayesian approach. J. Comput. Phys. 2016, 324, 115–136.
  45. Tracey, B.D.; Duraisamy, K.; Alonso, J.J. A machine learning strategy to assist turbulence model development. In Proceedings of the 53rd AIAA Aerospace Sciences Meeting, Kissimmee, FL, USA, 5–9 January 2015; p. 1287.
  46. Ling, J.; Kurzawski, A.; Templeton, J. Reynolds averaged turbulence modelling using deep neural networks with embedded invariance. J. Fluid Mech. 2016, 807, 155–166.
  47. Kutz, J.N. Deep learning in fluid dynamics. J. Fluid Mech. 2017, 814, 1–4.
  48. Cheung, S.H.; Oliver, T.A.; Prudencio, E.E.; Prudhomme, S.; Moser, R.D. Bayesian uncertainty analysis with applications to turbulence modeling. Reliab. Eng. Syst. Saf. 2011, 96, 1137–1149.
  49. Edeling, W.; Cinnella, P.; Dwight, R.P.; Bijl, H. Bayesian estimates of parameter variability in the k–ε turbulence model. J. Comput. Phys. 2014, 258, 73–94.
  50. Zhang, Z.J.; Duraisamy, K. Machine learning methods for data-driven turbulence modeling. In Proceedings of the 22nd AIAA Computational Fluid Dynamics Conference, Dallas, TX, USA, 22–26 June 2015; p. 2460.
  51. Duraisamy, K.; Zhang, Z.J.; Singh, A.P. New approaches in turbulence and transition modeling using data-driven techniques. In Proceedings of the 53rd AIAA Aerospace Sciences Meeting, Kissimmee, FL, USA, 5–9 January 2015; p. 1284.
  52. Parish, E.J.; Duraisamy, K. A paradigm for data-driven predictive modeling using field inversion and machine learning. J. Comput. Phys. 2016, 305, 758–774.
  53. Geneva, N.; Zabaras, N. Quantifying model form uncertainty in Reynolds-averaged turbulence models with Bayesian deep neural networks. J. Comput. Phys. 2019, 383, 125–147.
  54. Sarghini, F.; De Felice, G.; Santini, S. Neural networks based subgrid scale modeling in large eddy simulations. Comput. Fluids 2003, 32, 97–108.
  55. Moreau, A.; Teytaud, O.; Bertoglio, J.P. Optimal estimation for large-eddy simulation of turbulence and application to the analysis of subgrid models. Phys. Fluids 2006, 18, 105101.
  56. Beck, A.D.; Flad, D.G.; Munz, C.D. Deep neural networks for data-driven turbulence models. arXiv 2018, arXiv:1806.04482.
  57. Maulik, R.; San, O.; Rasheed, A.; Vedula, P. Subgrid modelling for two-dimensional turbulence using neural networks. J. Fluid Mech. 2019, 858, 122–144.
  58. Bao, T.; Chen, S.; Johnson, T.T.; Givi, P.; Sammak, S.; Jia, X. Physics guided neural networks for spatio-temporal super-resolution of turbulent flows. In Proceedings of the Thirty-Eighth Conference on Uncertainty in Artificial Intelligence, PMLR, Eindhoven, The Netherlands, 1–5 August 2022; pp. 118–128, ISSN 2640-3498.
  59. Fukami, K.; Fukagata, K.; Taira, K. Super-resolution reconstruction of turbulent flows with machine learning. J. Fluid Mech. 2019, 870, 106–120.
  60. Fukami, K.; Hasegawa, K.; Nakamura, T.; Morimoto, M.; Fukagata, K. Model Order Reduction with Neural Networks: Application to Laminar and Turbulent Flows. Comput. Sci. 2021, 2, 467.
  61. Fukami, K.; Fukagata, K.; Taira, K. Super-resolution analysis via machine learning: A survey for fluid flows. Theor. Comput. Fluid Dyn. 2023.
  62. Buzzicotti, M. Data reconstruction for complex flows using AI: Recent progress, obstacles, and perspectives. EPL 2023, 142, 23001.
  63. Khan, M.I.H.; Sablani, S.S.; Nayak, R.; Gu, Y. Machine learning-based modeling in food processing applications: State of the art. Compr. Rev. Food Sci. Food Saf. 2022, 21, 1409–1438.
  64. Marcato, A.; Boccardo, G.; Marchisio, D. From Computational Fluid Dynamics to Structure Interpretation via Neural Networks: An Application to Flow and Transport in Porous Media. Ind. Eng. Chem. Res. 2022, 61, 8530–8541.
  65. Dinesh Kumar, M.; Ameer Ahammad, N.; Raju, C.; Yook, S.J.; Shah, N.A.; Tag, S.M. Response surface methodology optimization of dynamical solutions of Lie group analysis for nonlinear radiated magnetized unsteady wedge: Machine learning approach (gradient descent). Alex. Eng. J. 2023, 74, 29–50.
  66. Priyadharshini, P.; Archana, M.V.; Ahammad, N.A.; Raju, C.; Yook, S.J.; Shah, N.A. Gradient descent machine learning regression for MHD flow: Metallurgy process. Int. Commun. Heat Mass Transf. 2022, 138, 106307.