1. Sustainable Design, Engineering, and Discovery of Innovative Nanomaterials
In the past, high-throughput screening of nanomaterials was performed with molecular simulations or ab initio calculations for the theoretical determination of their properties based on their chemical structure [1]. One current limitation is the efficiency with which the nanomaterials design space can be explored. The sheer diversity of nanomaterials and their corresponding functional properties could occupy the scientific community for years. Currently, many material options lack an application-oriented character, while structured experimentation pathways on their own have limited chances of leading to new knowledge and the invention of new nanomaterials. Even with AI-derived design rules and best candidates, the number of usable instances depends on the amount of data, and big data handling requires efficient algorithms, since new obstacles can be introduced by time and memory/processor demands.
More specifically, in the case of metal–organic frameworks (MOFs), the design parameters and functionalization strategies should be optimized in order to identify synthetic paths for producing MOFs with an engineered distribution of active metallic sites within the porous network, delivering improved encapsulation efficiency and catalytic properties [2]. Accurate predictive models can efficiently assist the screening process, considering that the hypothetical structural combinations number in the millions, as do the manufacturing parameters that can realistically enable their synthesis. However, deploying molecular simulations across all possible MOF structures is restricted by their high computational cost. In the era of high-throughput nanomaterials design, machine learning has shown the potential to overcome the burden of such expensive calculations without compromising the proper establishment of structure–property relationships. It is an enabling mechanism for exploring novel nanomaterials in uncharted chemical space and identifying the best candidate materials tailored to the application requirements [1]. The main bottleneck in this case, and for machine learning in general, is access to sufficient, high-fidelity data in order to enhance the innovation potential and identify hidden patterns that contribute to the accuracy of future predictions. Provided that access to high-quality data is ensured, the impact of machine learning can be evidenced in critical areas, e.g., the wise management of resources, time, and energy, which contributes to the sustainability and viability of nanostructure development at the industrial level.
Machine learning has already found success in nanoengineering; more specifically, artificial neural networks (ANNs) were utilized by Li et al. in the modelling of pulse electrodeposition of composite Ni–TiN coatings to tune the nanogranular structure. The data were obtained from 45 different steel substrates to predict the sliding wear resistance of the coatings with an error in the region of 4.2%. The authors showed that a larger TiN grain size results when a lower current density and a prolonged pulse interval are selected, while the optimum wear resistance is obtained for average crystallite sizes in the region between 39 and 58 nm
[3]. In another study, involving the physicochemical profile of TiO2 nanoparticles, zeta potential prediction was enabled by ANNs. The temperature, pH, ionic strength, and mass content of aqueous dispersions were studied, with pH being the most influential descriptor. The constructed model was thus regarded as a step beyond currently applied statistics and towards design space exploration across a wide range of conditions in silico, which due to technical and time constraints cannot all be experimentally validated but can offer guidance on the next experiments. The impact on different industries, such as pigment and pharmaceutical production, minerals processing, and construction, can be realized by tailoring the synthesis or manufacturing parameters towards an increased zeta potential value, in order to minimize nanoparticle agglomeration events and improve the sustainability of production when nanoparticles are used as reinforcements [4].
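To make this type of model concrete, the sketch below trains a small feed-forward ANN to map dispersion descriptors (temperature, pH, ionic strength, mass content) to zeta potential. It is a minimal illustration with synthetic placeholder data, not the published implementation.

```python
# Minimal sketch: ANN regression of zeta potential from dispersion descriptors.
# Synthetic placeholder data; replace X and y with measured values.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Columns: temperature (°C), pH, ionic strength (mM), mass content (wt%)
X = rng.random((200, 4)) * np.array([40.0, 10.0, 100.0, 5.0])
y = -60.0 + 8.0 * X[:, 1] + rng.normal(0.0, 2.0, 200)   # placeholder zeta potential (mV)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = make_pipeline(
    StandardScaler(),                                   # scale descriptors before the ANN
    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0),
)
model.fit(X_train, y_train)
print("R^2 on held-out data:", model.score(X_test, y_test))
```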
Of course, the utility of ANNs for structural design and process optimization has an immediate impact on the exploration of the design space; however, it is process specific. Such algorithms can be enabling once they are shared, but models developed by training on limited amounts of data cannot be shared with success. Overtraining, which corresponds to excellent fitting of the known data, limits a model once it is deployed on new data, and going beyond this requires access to thousands of data points. Under those conditions, the resources used for training a model can reduce the time investment for new users seeking to use an already developed model to explore a rich design space, since the model can be reused once the generalizability and speed criteria are met.
Graphene research is another bright engineering field, owing to the ability to tune multiple factors at the nanoscale, such as defects, edges, vacancies, corners, dopants, reconstructions, and adsorbates, which determine the remarkable properties of graphene in terms of optical transparency and electronic, mechanical, and thermal behavior. In theory, perfect graphene lacks the bandgap functionality required for transistors; this functionality can be enabled by engineering a bandgap of up to 0.3 eV through the introduction of Stone–Wales and vacancy-type defects, or by controlling the density of π-electrons to tune reactivity and induce local charge. Motevalli et al. modelled graphene oxide template structures to gain insight into the frequency of defects and broken bonds based on structural features, which significantly affect the strength and the conductive properties of this modern nanomaterial. The dataset contained 20,000 different electronic structures obtained by simulation, and machine learning was used to establish a predictive Bayesian network model using 829 structural features, of which 223 proved sufficient to develop an accurate model. For instance, in the case of graphene oxide, the size, shape, edges, and corners can be correlated with the distribution of broken bonds. The distribution of hydrogen atoms was found to be representative of lattice rupture, while the presence of oxygen functional groups is connected to the actual number of broken bonds. Moreover, the presence of ether and hydrogen was shown to affect the integrity of the carbon lattice, which can be used to enhance the impact properties and overall performance. Their methodology was optimized by incorporating the density functional tight binding (DFTB) method on a training set consisting of 20,396 instances. The modelled graphene oxide templates represent surfaces varying from 320 Å² to 2457 Å², several functional groups (epoxide, ether, hydroxyl) at various compositions, and different morphologies (triangular, rectangular, rhombic, hexagonal). This analysis corresponds to over 8,500,000 possible adsorption sites, and the model demonstrated the generalization ability needed for transfer learning towards other nanomaterials with known structure, which emphasizes the significance of machine learning models in engineering/design decisions for graphene structures, as well as their contribution to nanoinformatics [5].
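The feature-reduction step described above can be illustrated generically: a tree-ensemble regressor ranks a large pool of structural descriptors, and only the most informative subset is retained for the final predictor. This is a stand-in sketch for the authors' Bayesian network workflow, with placeholder data shapes and a placeholder target.

```python
# Minimal sketch: rank a large pool of structural descriptors and keep only
# the most informative subset. Placeholder data; in practice X would hold the
# simulated structural features (size, shape, edge/corner counts, group contents).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 829))                                # 829 candidate structural features
y = X[:, :10].sum(axis=1) + rng.normal(scale=0.1, size=2000)    # placeholder target (e.g., broken-bond count)

ranker = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
top = np.argsort(ranker.feature_importances_)[::-1][:223]       # indices of the 223 top-ranked features
X_reduced = X[:, top]
print("Retained feature matrix:", X_reduced.shape)
```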
In another case, machine learning has been used to reveal the intrinsic electronic properties of graphene and to rapidly identify desired electronic properties by correlating the nanoscale features of graphene, with or without functionalization treatment, represented by a structural footprint. The dataset comprised 622 material options of computationally optimized graphene, and the electron affinity (EA), Fermi level energy (EF), electronic band gap (EG), and ionization potential (EI) were modelled by Fernandez et al., demonstrating a high accuracy metric of R² ~0.9 [6]. The benefit of this approach is the ease of acquiring the experimental structural properties of graphene by characterization (surface area, geometry, edge type, aspect ratio), whereas characterization of the electronic properties is often challenging, especially when graphene is produced at scale. In this research, the relevant electronic properties were obtained using DFTB simulation, with graphene structures consisting of 16 to 2176 carbon atoms. GA optimization was applied to choose the best hyperparameter and feature combinations and reach a universal solution for the developed model, terminating when the fitness score remained unchanged for 90% of the generations. This in-silico, high-speed screening approach can facilitate future research by reducing the experimental time and investment required to gain deep insights into graphene nanostructures, especially when virtual libraries containing experimental data are accessible.
Machine learning has also been deployed for the precise prediction of crystallographic orientations at the nanoscale, to assist the experimental design and the lithographic preparation of such structures. In this case, Fernandez et al. used the electronegativity values of graphene nanoflakes and their molecular graph information to facilitate the rapid prediction of the energy gap, with an absolute error lower than 0.5 eV, and the corresponding determination of the nanoflake topology. Such models can support the control of molecular connectivity and edge characteristics, can find application beyond graphene with other 2D nanomaterials, and can effectively support the screening process, as well as the correlation of functional and structural properties, especially when libraries with relevant data are accessible. In this research, the most accurate predictive model was established with support vector machines (SVM), optimized using a genetic algorithm (GA) in a background procedure, with a topological resolution distance of 1–42 atoms [7].
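A combination of this kind can be sketched generically as a support vector regressor whose hyperparameters are tuned by a simple genetic algorithm. The descriptors, target, search bounds, and GA settings below are illustrative assumptions, not the published setup.

```python
# Minimal sketch: SVM regression with (C, gamma, epsilon) tuned by a small
# genetic algorithm. Synthetic placeholder data stand in for nanoflake
# descriptors and the energy gap target.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))                                          # placeholder topological descriptors
y = X[:, 0] ** 2 + 0.5 * X[:, 1] + rng.normal(scale=0.05, size=300)     # placeholder energy gap

def fitness(genome):
    """Cross-validated R^2 of an SVR parameterized by the genome (log-scale)."""
    C, gamma, epsilon = np.exp(genome)
    return cross_val_score(SVR(C=C, gamma=gamma, epsilon=epsilon), X, y, cv=3, scoring="r2").mean()

pop_size, n_generations = 20, 15
population = rng.uniform(-3, 3, size=(pop_size, 3))                     # genomes in log-space

for _ in range(n_generations):
    scores = np.array([fitness(g) for g in population])
    parents = population[np.argsort(scores)[-pop_size // 2:]]           # keep the fittest half
    children = parents[rng.integers(0, len(parents), pop_size // 2)]    # clone random parents
    children = children + rng.normal(scale=0.3, size=children.shape)    # Gaussian mutation
    population = np.vstack([parents, children])

best = population[np.argmax([fitness(g) for g in population])]
print("Best (C, gamma, epsilon):", np.exp(best))
```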
The present status of graphene research with machine learning is largely guided by simulation data, which can provide fundamental insights into structure–property relations and offer larger training datasets that feed data-driven models and reveal property relations that would not be traceable by humans alone. The main concern is that simulation methods generate data in a structured way, whereas real materials production introduces a manufacturing footprint, including structural defects and randomly distributed heteroatoms that affect the properties. Bridging this gap is currently limited by the throughput, scale, and limitations of characterization techniques, which cannot provide experimental validation of the simulated structures. Thus, it is challenging to actually connect graphene structure and manufacturing, considering also the statistical character of the graphene powders and films produced, which at present are only sampled to extract structure and property information. Still, the mapping of intrinsic features of nanomaterials generated in silico can provide a solid basis to push innovation and sustainability by identifying the best candidates for nanomaterial applications and improving the resource allocation of goal-oriented experiments.
In-silico models for carbon nanodots (CDs) were developed by Han et al. to support parameter optimization, which is a challenging task for these sophisticated materials that serve special applications due to their exotic properties, such as the sensing of Fe³⁺ ions in solution. In this case, the research bottleneck lies in the quality of the process parameters, which often suffer from noise, and in their wide exploration space, which cannot be handled by a single researcher. Machine learning can successfully address this challenge and assist the screening of high-quality CDs thanks to its effective prediction, optimization, and fast acquisition of results. This research focused on hydrothermal synthesis, which is well established, and the prediction of the process-related fluorescent quantum yield (QY) was based on several descriptors, such as EDA volume, mass of precursor, reaction temperature, temperature ramp, and reaction time, over 391 experiments. More specifically, the XGBoost-R algorithm guided the experimentally verified synthesis of CDs out of 702,702 available combinations, exhibiting strong green emission with a QY of up to 39.3%. The most important features for the engineering of CDs were shown to be the mass of the precursors and the volume of the alkaline catalyst for achieving a high QY and successfully bridging the gap between theory and experiment, while the incorporation of chemical descriptors could further support advanced research in the future and generalize the machine learning model [8]. By providing the framework and sharing the methods and algorithms, there is the potential to cover the engineering aspects of the aforementioned graphene development approaches.
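A gradient-boosted regression of this kind can be sketched as follows, assuming the xgboost package is installed. The column names mirror the descriptors listed above, while the data themselves are synthetic placeholders rather than the published dataset.

```python
# Minimal sketch: gradient-boosted regression of fluorescent quantum yield (QY)
# from hydrothermal synthesis parameters, with a feature-importance readout.
import numpy as np
import pandas as pd
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
cols = ["eda_volume", "precursor_mass", "reaction_temp", "temp_ramp", "reaction_time"]
X = pd.DataFrame(rng.uniform(0, 1, size=(391, 5)), columns=cols)        # placeholder process parameters
y = 40 * X["precursor_mass"] * X["eda_volume"] + rng.normal(0, 1, 391)  # placeholder QY (%)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X_tr, y_tr)

print("R^2 on held-out data:", model.score(X_te, y_te))
print("Feature importances:", dict(zip(cols, model.feature_importances_.round(3))))
```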
In another case, nanomaterials were designed in silico by Dewulf et al. to provide feedback to the bioinspired in vitro synthesis of highly porous silica as a green alternative to conventional synthesis. The bioinspired synthesis is characterized by a short reaction duration, mild conditions at room temperature, and the use of an eco-friendly precursor (sodium metasilicate pentahydrate), making it a viable and scalable approach with promising deployment at industrial scale. In this case, the best-case scenario is agreement between predictive modelling and experiment. Practically, what is needed is a machine learning basis that, beyond predictions, can follow the experimental sequencing, apply automated corrections and cross-validations to improve the model, and propose new experiments.
Furthermore, the sustainability of the process can be further supported by such a design of experiments, with global sensitivity analysis performed to support resource-efficient product and process development. In their approach, the authors used a 2³ factorial materials design to improve reaction yield and surface area, which included the parameters of precursor concentration, pH, and reactant concentrations in the form of the silicon-to-nitrogen ratio, in combination with optimization using a central composite design, leading to a yield of 90 mol%, while the highest surface area obtained was 400 m² g⁻¹. Since the aforementioned properties are critical factors for the successful commercialization of the materials, regression analysis was implemented along with global sensitivity analysis using the Sobol’ index in order to further improve the process. A central composite design and multivariate analysis were used to model the experimentally determined outcomes and assist the rapid identification of interactions and parameters correlated with the physicochemical properties with high precision, covering a wide parameter and experimental space. The main parameters involved in this machine learning approach were the Si precursor concentration, pH, and the Si:N ratio, used to predict the reaction yield and the surface area. It was shown that the Si precursor concentration and the Si:N ratio determine whether precipitation occurs. The effort spent on experimental verification was reduced by using a sequential design to efficiently perform pre-screening and screening, followed by the optimization of the experiments
[9].
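Global sensitivity analysis of a fitted response can be sketched with the SALib package (assumed installed). The surrogate function below is a placeholder standing in for a regression model fitted to the experimental yield, and the parameter names and bounds are illustrative.

```python
# Minimal sketch: Sobol' global sensitivity analysis of a surrogate yield model.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["Si_precursor_conc", "pH", "Si_N_ratio"],
    "bounds": [[0.1, 1.0], [6.0, 11.0], [0.5, 4.0]],      # illustrative ranges
}

def surrogate_yield(p):
    """Placeholder response surface standing in for the fitted regression model."""
    conc, ph, ratio = p
    return 90.0 * conc * np.tanh(ratio) - 2.0 * (ph - 8.5) ** 2

params = saltelli.sample(problem, 1024)                   # Sobol' sampling design
Y = np.array([surrogate_yield(p) for p in params])
Si = sobol.analyze(problem, Y)

for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name}: first-order = {s1:.2f}, total = {st:.2f}")
```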
Finally, in-silico materials development has been demonstrated for organic structure-directing agents (OSDAs), where eight different models were utilized together with evolutionary algorithms. The dataset consisted of 1,000,000 trial molecules, which were generated by MD. Machine learning was used to predict the stabilization energy in comparison with the respective MD output in order to decide on the synthetic pathway. The identification of the compounds with a stabilization energy below −15 kJ/(mol Si) was conducted using an ANN supported by a genetic optimization algorithm, which narrowed the pool down to a smaller number of molecules. The training dataset consisted of stabilization energies for 4781 molecules on a putative zeolite, obtained through computationally intensive MD calculations, and the workflow resulted in the in-silico generation of 469 exceptionally stable structures. Thus, an effective strategy for the design of OSDAs was proposed for zeolite beta, with high correlation to the MD results [10].
In order to go beyond in-silico materials design and design space exploration, machine learning should utilize real-world experimental data that satisfy data quality standards. Another challenge is connected to the speed of experimental data acquisition and the amount of data analysis required per measurement, which depends on human resources. Machine learning advances can efficiently automate the extraction of materials properties, e.g., measuring the diameter of a far greater number of carbon nanotube (CNT) instances than a single user could, effectively increasing the statistical sample and thus the robustness of the derived data.
2. Contribution of High-Resolution Characterization Coupled with Machine Learning and Computer Vision to Structure High-Quality Materials Datasets for Materials Development
Machine learning comes to bridge the gap in acquiring high-quality materials information from high-resolution techniques at high throughput, by utilizing methods that support the detection of target features and properties, which can then be used for the in-silico development of materials or for the prediction of unknown properties of new or existing materials. Computer vision is also a tool that should not be excluded, since image analysis can deliver automated recognition of key mechanical behavior (stress–strain plots) and extraction of the mechanical properties of produced materials, much like a human analyst. The most prominent machine learning libraries for computer vision are Torch, Theano, Caffe, TensorFlow, and Keras [11]. Already, scanning electron microscopy (SEM) has been used to identify the successful synthesis of nanomaterials, to evaluate material quality by observing its surface, and to investigate the (de-)bonding of hybrid structures, as well as the shape and size distribution, and several efforts have been made to develop image vision AI models supporting the automatic classification and annotation of images of different materials, built on existing databases (over 150,000 images), with an accuracy of almost 90% [12].
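As a sketch of such an image-classification workflow using Keras (one of the libraries listed above), a pretrained backbone can be adapted to labelled SEM images organized in class folders. The directory path, image size, and backbone choice are assumptions for illustration, not a reference to a specific published database.

```python
# Minimal sketch: transfer-learning classifier for SEM image categories.
# "sem_images/train" and "sem_images/val" are hypothetical folders whose
# subdirectories define the classes (e.g., particles, fibres, films).
import tensorflow as tf
from tensorflow.keras import layers, models

train_ds = tf.keras.utils.image_dataset_from_directory(
    "sem_images/train", image_size=(224, 224), batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "sem_images/val", image_size=(224, 224), batch_size=32)

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False                                    # keep the pretrained features frozen

model = models.Sequential([
    layers.Rescaling(1.0 / 127.5, offset=-1.0),           # MobileNetV2 expects inputs in [-1, 1]
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(len(train_ds.class_names), activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=5)
```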
Sophisticated focused ion beam (FIB) characterization also uses digital image correlation for obtaining insights about the stress gradients and for imaging soft materials such as filled reinforced polymer nanocomposites
[13]. Coupling AI with this technique has led to effective image processing that achieves super-resolution of 3D images and reduces observation times, demonstrating superior restoration as the asymmetric resolution increases. Moreover, images from X-ray spectroscopy surface mapping (EDX, EBSD) or Raman mapping are utilized for further image analysis, e.g., clustering to identify the microstructural composition of materials, which could be relevant to monitoring martensitic transformation and phase nucleation during the thermal processing of steels via elemental maps, or to examining the bonding state in composites and the hybridization of bonds at the interfaces.
Another interesting high-resolution imaging technique, scanning tunnelling microscopy (STM) [14], was coupled with deep learning, which was utilized for automatic particle recognition, and the model “ParticlesNN” was deployed online as an open-source tool to facilitate the extraction of nanoparticle information. The main benefits provided are the ability to handle images containing noisy data and to perform statistical processing in the form of histograms and tables for all the identified nanoparticles. The “ParticlesNN” web service also provides the flexibility to classify particles at the micrometer scale (with a lower resolution limit than the technique used for establishing the machine learning models), while the input images can be derived from different instruments, such as SEM, due to the similarity of the image output. Thus, it is possible to maximize the output of an imaging technique and support increased accuracy, higher-quality material descriptors, and a larger number of annotated instances, to feed the machine learning models and establish a statistically representative connection between structure and properties.
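The kind of per-particle statistics described here can be approximated with a classical blob-detection sketch, shown below as a simple stand-in for a trained detector such as ParticlesNN (it is not that tool's code). A bundled scikit-image sample image is used purely as a placeholder input.

```python
# Minimal sketch: automatic particle detection and size statistics from a
# microscopy-style image via Laplacian-of-Gaussian blob detection.
import numpy as np
import matplotlib.pyplot as plt
from skimage import data, color, filters
from skimage.feature import blob_log

image = color.rgb2gray(data.hubble_deep_field()[0:400, 0:400])  # placeholder; use an STM/SEM scan instead
smoothed = filters.gaussian(image, sigma=1)                     # light denoising

blobs = blob_log(smoothed, min_sigma=2, max_sigma=15, threshold=0.05)
radii_px = blobs[:, 2] * np.sqrt(2)                             # estimated radius ≈ sqrt(2) * sigma

print(f"Detected {len(blobs)} particles")
plt.hist(radii_px, bins=20)
plt.xlabel("Particle radius (pixels)")
plt.ylabel("Count")
plt.savefig("particle_size_histogram.png")
```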
Machine learning was used by Lee et al. to overcome the challenge of sensitive and accurate characterization at the nanoscale. Single-particle inductively coupled plasma mass spectrometry (spICP-MS) is a prolific method in this field and outperforms other conventional techniques, such as dynamic light scattering (DLS); it was used to acquire data on the size distribution of Au nanoparticles and to measure their concentration with the highest possible precision. K-means clustering was used to process the raw data for improved discrimination of the signal, removing the background noise and quantitatively resolving different size groups with a resolution lower than 2% by mass and 20 nm per nanoparticle
[15].
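The clustering step can be illustrated with a minimal sketch in which k-means separates background counts from nanoparticle pulse populations; the synthetic intensities below are placeholders for a real spICP-MS time scan.

```python
# Minimal sketch: k-means clustering of single-particle ICP-MS pulse intensities
# to discriminate background noise from particle size populations.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
background = rng.normal(5, 1, 3000)        # continuous background signal (counts)
small_np = rng.normal(60, 8, 400)          # pulses from a smaller size group
large_np = rng.normal(220, 20, 150)        # pulses from a larger size group
intensities = np.concatenate([background, small_np, large_np]).reshape(-1, 1)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(intensities)
centres = np.sort(kmeans.cluster_centers_.ravel())
print("Cluster centres (counts):", np.round(centres, 1))
# The lowest-intensity cluster is treated as background; the remaining clusters
# correspond to size groups that can then be converted to mass or diameter.
```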
Current efforts are limited, though, to information extraction from nanoparticles with circular cross-sections and spherical shapes. Machine learning can unlock its maximum potential when deployed on imaging and other shape-specific characterization techniques once more geometries can be identified and conditional functions assess critical material structure features, i.e., inner and outer diameter, length, curvature, deviation from circular shape, etc. Moreover, current analysis is often limited to 2D information, which in the case of spherical particles can be extrapolated to 3D. However, 3D information is often needed to describe nanomaterials with complex geometries.
Horwath et al. used computer vision and ETEM images to improve image segmentation capabilities in an automated way and to deploy a model suitable for nanomaterial detection that is limited only by the resolution of the characterization technique and not by the ability of the machine learning model to identify and localize features. In their approach to this common segmentation task, regularization of the input datasets was shown to be the key factor in establishing an effective deep learning model, rather than the development of an elaborate convolutional neural network (CNN) architecture. This is connected to the selection of boundary pixels, which requires the semantic information distributed across the image and which may be lost when the variance of the intensity histograms increases [16]. Knowledge of the data features, as well as a hypothesis-oriented design of the predictive model, was shown to address common challenges in materials science informatics, while class imbalance, overfitting, and access to sufficient amounts of data limit the prediction efficiency. The proposed architecture is quite simple, consisting of a single convolutional layer, which delivered efficient computational performance with regard to accuracy and the ability to quantify results, while the model kept the prediction standards close to the state of the art (SoA) by adopting suitable metrics that evaluate the limitations induced by several descriptors. Interoperability of the developed model is also satisfied by the fact that computer vision models are not limited by the acquisition technique but by the instrument resolution limits.
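In the spirit of the lightweight architecture described above (and not as the published implementation), a per-pixel segmentation model with a single convolutional layer can be sketched in Keras; the image size and the placeholder arrays are assumptions.

```python
# Minimal sketch: single-convolutional-layer segmentation producing a per-pixel
# particle probability map. Random arrays stand in for micrographs and masks.
import numpy as np
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(256, 256, 1)),
    layers.Conv2D(1, kernel_size=5, padding="same", activation="sigmoid"),  # one conv layer, sigmoid output
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

images = np.random.rand(8, 256, 256, 1).astype("float32")   # placeholder normalized micrographs
masks = (images > 0.5).astype("float32")                    # placeholder binary particle masks
model.fit(images, masks, epochs=2, batch_size=4)
```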
Similarly, Ilett et al. applied the ilastik tool for object classification, which can be tailored to a wide range of parameters and used to deconvolute particles in microscopy images [17]. This tool includes a function for detecting different particles even in an agglomerated state, together with corresponding quantification features that could be used to monitor nanomaterial stability and obtain more insights into interparticle dynamics and the tendency to agglomerate, which is a critical aspect in nanomanufacturing. Thus, machine learning was introduced in that case to provide the capacity to overcome the time-consuming manual process of identifying the agglomeration tendency and projected shape of agglomerates, as well as to overcome the limitations of DLS characterization, which, as discussed above, is prone to missing agglomeration events since a circular shape of the nanomaterials is assumed in all cases. In addition, it has been argued that DLS information on colloidal stability can be limited, since the suspensions studied with this method are stable colloids.
In summary, by exploiting high-resolution computer vision and other machine learning models, it is possible to detect the production footprint in the (nano-)materials properties and to provide actual characterization mapping over a wide range of instances. By using previous knowledge in terms of characterization output and images, it is possible to automatically extract a large number of material parameters and high-quality data, which can be used for training in order to extrapolate predictions to multiple material features.
Several approaches include randomization of the process parameters to achieve the suitable material properties, which requires access to a large variety of unbiased data. Particle swarm optimization (PSO) and GA have typically been shown to efficiently support the design of experiments and the optimization tasks using the minimum number of experiments in this data-driven approach. Thus, machine learning can efficiently help scientists obtain high-throughput information from characterization techniques and utilize previous raw research data or metadata to reveal unforeseen physical/chemical properties, topologies, stability (agglomeration tendency), high-quality features, and the inner structure of materials with high precision
[18], which enhances the innovation potential.
3. Optimizing Formulations and Composition in Nanocomposite Materials Engineering and Additive Manufacturing to Improve Performance and Support Applications
Additive manufacturing (AM) has attracted considerable attention in recent years due to its prominent technological role in shaping and (nano-)manufacturing complex metal-, polymer-based, and nano-reinforced components that serve demanding engineering applications, with the unmatched benefit of providing zero-waste, “bottom-up”, scalable manufacturing solutions for complex architectures. AM can be realized in desktop microfactories with many benefits: the need for additional tooling is reduced, functional parts can be delivered without assembly and with minimum downtime, and products can be customized to meet societal needs
[19][20][21][22][23]. The role of nanotechnology in this field lies in customizing and engineering the material and nanocomposite properties by the utilization of nanoreinforcements, such as graphene and nanofibers, with high reactivity, surface area, conductivity, sensing capabilities, and modified surface chemistry, which can introduce multiple functionalities at the multiscale level
[20].
One main research priority remains to replicate the properties of conventionally used materials with more sustainable material options. Established research has shown how carbon nanomaterials (graphene, graphene oxide, CNTs, carbon black nanoparticles) can be used to formulate AM nanocomposites with improved tensile properties, thermal stability, conductivity, and radiofrequency-induced heating capability, and to broaden the application field of the developed materials. Moreover, it is widely held within the AM community that the vast amounts of domestic waste could be managed by efficient and sustainable upcycling strategies, leading to the manufacturing of added-value products by incorporating nanotechnology into the feedstock formulations [24]. For instance, polymer nanocomposites have the potential to deliver adequate performance in many fields of application with specific needs for wettability, elasticity, durability, and conductance. The AM process may be hindered, though, by rheological behavior, clogging, or mixing homogeneity issues at the high nanomaterial mixing ratios that may be required to reach, e.g., a workable conductivity, stiffness, etc.
[25].
The digital character of AM can lead the digital nanomanufacturing era empowered by AI. The AM workflow involves the virtual and reverse design of the material/object (layer structure, composition, nano-fillers, architecture, aesthetics) from the smallest volume of reference, in the form of droplets, powder, or wire, to fabricate flexible and lightweight components. The digitalized character of the AM process offers the opportunity for fully digitally controlled operations, which can be advanced and accelerated with smart, high-precision, data-driven tools and metrology to introduce online, real-time, and on-demand process adaptation capabilities and to tackle the current variation in product quality, thus increasing confidence and reducing unpredictability concerns
[21][22][26].
In order to bridge the gap between current manufacturing technologies and smart factories, AM infrastructures should go beyond the current SoA regarding (i) the sole dependence on feedstock screening operations for process assessment and (ii) “open-loop” system operation, by introducing sensor systems for online feedback measurement that enable smart process self-adaptation (control, quality assessment, calibration, monitoring) supported by AI. In this direction, Banadaki et al. proposed correlations between feedstock, sensor data, process parameters, and characterization data by establishing a schema for interactive and scalable machine learning development, to support the reliability of the process in a cost-effective manner and to improve AM end-products [27]. In this scope, it is essential to incorporate (open and interoperable) ontologies in order to facilitate the knowledge management of digitalized and structured AM data and to support AI in revealing and discovering reasonable, currently unseen new knowledge [28]. Relevant progress has already been realized by Granta additive manufacturing, the Senvol Database, and Senvol ML, and this is expected to evolve and flourish in the next decade.
Amongst the applications of AM, the ambitions and challenges expected to be confronted in the next years are related to bio-based applications, from the fabrication of drug delivery devices/sensors to the manufacturing of polymer-based synthetic tissues and organs reinforced with nanomaterials. A strategic benefit in this direction is the ability of AM techniques to combine precursors with different properties to represent human physiology, which often requires the assembly of hard and soft segments (both cellular and acellular) to sustain the stresses induced in the human body environment. Several techniques have been adopted in this field, including selective laser sintering, stereolithography, fused deposition modelling, and bioprinting (extrusion-assisted, laser, droplet, ink) [20]. Engineered biocompatible materials for slow-release medicines and tissue engineering can occasionally perform even better than natural materials, and thus AM can have a great impact in the field by providing easily accessible, low-cost, and faster solutions addressing the market needs for health care technologies across the globe [24]. The next generation of miniaturized sensor devices has been realized in the AM industry, with recent trends towards developing lab-on-a-chip (LOC) devices to track (bio)chemical processes in the clinical diagnostics sector and perform fast and easy assessment. Other benefits of this technology include reconfigurability, modularity, portability, compactness, low power consumption and electronic noise, embedded computing capability, and a highly localized topology, which promises the analysis of a specified point of care with improved sensitivity and minimal resources. Engineering LOC devices enables the control of microfluidics, which is key for the electrokinetic or micropumping control of fluid transport and efficient separation when examining liquid samples with high precision under several conditions, whether the flow is continuous or droplet-wise sampled. Thus, this is an important technology for the advancement of biomedicine through automatic continuous tracking, which can be used for online feedback and adjustment of multi-material AM processes and provide tailor-made microfluidic micro-electromechanical systems (MEMS)
[29].
In a different bio-based application, Zafeiris et al. used AM and 3D CAD representations to fabricate hydroxyapatite/chitosan scaffolds with a controlled porous network that meets cell attachment and cultivation requirements, enabling tissue growth functionality and proper transfer of nutrients to the cell cultures. The direct ink writing method was used to realize the regeneration of bone tissue and successfully develop scaffolds mimicking a proper extracellular matrix [19]. Another case study of biomimicry using machine learning in AM concerns rapid design solutions within a vast design space in the field of biomimetic design. Simulated metamaterials and stored data were used as input by Gu et al. to develop a self-learning algorithm that identifies the best candidates for the production of high-performance hierarchical materials with highly defined microstructural patterns. Compared to the conventionally used FEM for the prediction of mechanical properties, it was shown that machine learning can shortcut long computational requirements from 5 days to less than 10 h, with up to 30 s needed to train the algorithms, which are able to screen at high throughput (billions of designs per hour)
[30].
In addition, and beyond healthcare industries, AM can serve a range of applications in the automotive, energy, and aerospace sectors due to its unique engineering features, with surface integrity identified as one main bottleneck. An intelligent and digitally controlled methodology was developed by Li et al., who introduced a sensor-based (accelerometers, infrared temperature sensors, thermocouples), data-driven ensemble model for surface roughness prediction [31]. With regard to the wider industrial exploitation of AM, product quality assurance, processing defects, access to materials libraries, and design for AM remain limiting factors. Machine learning, standalone or combined with physics modelling, can be the catalyst in this direction, especially with the scope of discovering/generating AM metamaterials and predicting their performance, namely elastic/shear modulus and Poisson’s ratio, in an automated manner based on the desired properties (virtual experiments). What is more, machine learning can assist on-demand and reverse process adaptation to improve quality control during manufacturing, assist the precise control of printing topology, including melt-pool geometry for DED processes, and enhance feedstock screening at the pre-manufacturing planning level. Another important quality parameter of engineered AM products concerns the mesoscale porosity, which is process dependent and highly related to the mechanical performance. Nullified porosity is the ultimate objective in metal AM, to achieve fully dense structures and obtain adequate fatigue properties, while in biological or energy absorption applications a controlled porous architecture is required. A machine learning paradigm for the latter case is to correlate porosity with manufacturing parameters using neural networks (NN) or an adaptive-network-based fuzzy inference system
[32].
The role of machine intelligence has risen to support the advancement of AM processes and materials evolution dedicated to different sectors. By using data-driven predictive modelling, it has been possible to identify and correlate the microstructure of materials with thermal stresses for metal components. CNNs have been successfully utilized to establish such structure–property relations. Bhutada et al. demonstrated the capability of identifying six different microstructures by utilizing feature extraction via k-means clustering on images; subsequently, image vision models were established to classify each microstructure with over 97% accuracy. The correlation with the principal, hydrostatic, and other stress tensors was conducted via regression among the six microstructures, which enabled quantitative comparisons of the internal stresses between model-based predictions and experimental values with high confidence. Another outcome concerned the NN model representation as a reduced-order microstructure, which can be used in conjunction with FEM to successfully predict thermal stress in 3D-printed components
[33].
AI has also found application in the online and real-time video monitoring (single-shot detector) of the fused filament fabrication (FFF) process. The detection of stringing defects generated during printing was enabled by the development of a deep convolutional NN, which established the connection between the defect patterns and the printing parameters without the need for machine or camera calibration, in order to enable fast online adaptation of the process. Successful case studies using computer vision in the powder spreading stage have also been published on the automated classification of unwanted phenomena and anomalies (splatter defects, delamination) for the live monitoring of the powder bed process [26][34]. Parameters such as the size of the printed device, the distance to the print area, and the camera resolution affect the precision of the computer vision, and by using AI it is possible to adapt these parameters in order to eliminate print errors
[34].
Laser-based AM is another competitive (nano-)manufacturing method, owing to its capacity for rapid prototyping at reduced cost and its ability to work on flexible and curved substrates, providing the advantage of forming wearable electronics with the involvement of nanotechnology. Especially for metal AM, nanomaterials can tackle the challenge of feedstock supply and the quality requirements regarding size distribution and uniformity, although size and shape still affect the success of nanomanufacturing
[35]. Real-time and intelligent control of the AM process is essential in this scope to minimize the defects created during printing and save energy by mitigating the need for post-annealing treatments after fabrication, while the throughput is increased
[26][35]. Another case is the utilization of a focused laser beam for the optical printing of nanomaterials dispersed in proper solvents on different surfaces
[35].
Self-organizing maps (SOMs) represent another machine learning method, which has been used for assessing AM profiling accuracy against digital CAD models in order to establish relations with processing parameters, such as extruder temperature and infill percentage, that may lead to deviations. Establishing such causal relationships was highlighted by the authors as an immediate measure to efficiently adapt the process and improve quality. The universality of this method is connected to the commonality of the correlated features, which can bring intelligence and new design rules to all AM methods that confront such quality challenges
[36].
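A SOM of this kind can be sketched with the third-party minisom package (assumed installed); the process parameters, deviation measure, grid size, and training settings below are illustrative placeholders.

```python
# Minimal sketch: self-organizing map relating printing parameters to a
# dimensional-deviation measure. Synthetic placeholder data.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(0)
# Columns: extruder temperature (°C), infill (%), measured deviation from CAD (mm)
data = np.column_stack([
    rng.uniform(190, 230, 500),
    rng.uniform(10, 100, 500),
    rng.uniform(0.0, 0.5, 500),
])
data_norm = (data - data.mean(axis=0)) / data.std(axis=0)     # normalize each feature

som = MiniSom(8, 8, input_len=3, sigma=1.5, learning_rate=0.5, random_seed=0)
som.random_weights_init(data_norm)
som.train_random(data_norm, num_iteration=5000)

# Map each processing condition to its best-matching unit; cells that collect
# high-deviation samples point to parameter regions needing process adaptation.
winners = np.array([som.winner(x) for x in data_norm])
print("Example best-matching units:", winners[:5])
```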
It is clear that machine learning holds a lot of promise for the establishment of data-driven supervising tools in a closed-loop AM process. In particular, SoA powder bed printers lead to variation in the mechanical properties from experiment to experiment. Automated defect recognition via image analysis with NNs has been outlined by Razaviarab et al. as a gamechanger for the smart, automated adjustment of AM processing parameters, which is expected to limit the waste produced and reduce the effort and energy wasted
[26][37].
Progress in the field of AM seems encouraging regarding the adoption of attractive, intelligent tools in production lines. The main concern remains the willingness of AM machine manufacturers and software developers to open channels and enable deep adaptations down to the software/firmware level, since intelligent algorithms have to be incorporated directly into the digital ecosystem to enable process adaptation and real-time control, as well as product optimization.
As has already been shown, ANNs dominate a considerable number of publications due to their unique ability to solve complex real-life engineering problems beyond AM by utilizing previously measured characterization data, and due to their ability to achieve significant time savings [38]. The interpretability of ANNs in decision making has been popular in data science, commonly through solving the inverse problem, where experimental data are correlated with the microstructure and functionalities of nanomaterials to gain new insights, using first-principles-based tools to accelerate the time-to-market of novel nanomaterials/nanocomposite materials/systems [39]. The attractiveness of this method lies in its simplicity, building on previous experience from characterization results without requiring a sound theory; such theory cannot be substituted by machine learning, but can be used complementarily
[40]. For instance, an ANN predictive model was developed to predict the pool boiling heat transfer coefficient (HTC) and design refrigerant-based nanofluids containing CNTs, TiO2, nanodiamond, and Cu, with a correlation coefficient (R²) of 0.9948 and an overall mean square error of 0.0153. In this case, data were mined from literature papers, resulting in a dataset spanning 1342 different experiments, and seven descriptors were included, namely heat flux, saturation pressure, base fluid thermal conductivity, nanoparticle thermal conductivity, nanoparticle concentration, lubricant concentration, and nanoparticle size. The pool boiling HTC of refrigerant-based nanofluids was determined over wide ranges of operating conditions, and the best performance was demonstrated by a simple one-hidden-layer architecture with 19 neurons, with tansig (hidden layer) and purelin (output layer) used as transfer functions
[41].
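The stated topology (one hidden layer of 19 neurons with a tanh-like activation and a linear output) can be reproduced as a Keras sketch; the seven input columns and the synthetic data are placeholders for the literature-mined dataset.

```python
# Minimal sketch: one-hidden-layer ANN (19 tanh neurons, linear output) for
# pool boiling HTC regression. Synthetic placeholders for the 7 descriptors.
import numpy as np
from tensorflow.keras import layers, models

rng = np.random.default_rng(0)
X = rng.uniform(size=(1342, 7)).astype("float32")   # heat flux, saturation pressure, conductivities, ...
y = (X @ rng.uniform(size=7)).astype("float32")     # placeholder HTC target

model = models.Sequential([
    layers.Input(shape=(7,)),
    layers.Dense(19, activation="tanh"),            # analogous to MATLAB's tansig
    layers.Dense(1, activation="linear"),           # analogous to purelin
])
model.compile(optimizer="adam", loss="mse")
history = model.fit(X, y, epochs=50, batch_size=32, validation_split=0.2, verbose=0)
print("Final training MSE:", history.history["loss"][-1])
```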
In another study, Demirbay et al. used a multi-layered feed-forward neural network (FFNN) to predict the electrical conductivity of polystyrene (PS) film coatings reinforced with multi-walled CNTs (MWCNTs). The dataset describing the formulation features contained the concentrations of surfactant, initiator, and MWCNTs, the molecular weight, and the particle sizes of the PS latex. Training regularization was performed using a Bayesian backpropagation algorithm. In this case, the ideal topology of the FFNN was evaluated using several metrics, such as the mean squared error (MSE) and the coefficient of determination (R²). The optimal network architecture consisted of eight nodes in the hidden layer with a log-sigmoid transfer function for training, which was confirmed by the R² value in the training phase and by the MSE value in both training and testing. The relative-importance-based sensitivity analysis also showed that the concentration of MWCNTs influenced the conductivity results to the greatest extent. Finally, a mathematical model incorporating the training weights and bias values was used to reproduce the predicted results, which were in agreement with the experimental values [38]. Therefore, an explicit mathematical function was established for the prediction of the conductivity using the weights and bias values.
Ashrafi et al., in their study of concrete nanoformulations, used a feed-forward back-propagation network with a specific architecture containing one hidden layer of 22 nodes to acquire the force–deflection curve and the 28-day compressive strength. The results were in accordance with the experimental output, while two additional methods, namely standard deviation normalization and the Levenberg–Marquardt algorithm, were proven to provide even better prediction efficiency
[42].
In the study of Huang et al., the interesting effects of CNT reinforcement of cement composites were investigated. More specifically, flexural and compressive strength were predicted using ANN and support vector machine (SVM) models, which were trained and tested on literature data comprising a sample of 114 experiments. Several aspects of the modelling, such as the size and quality of the dataset, experimental factors, and undefined parameters, strongly influence the modelling of the mechanical properties, and all of them seem to lead to deviations between the predicted and actual values. Among the parameters used in this investigation, the authors revealed that the length of the CNTs has the highest impact on the output values of the compressive strength. Regarding the flexural strength, the curing temperature was shown to be the most influential parameter. Both outcomes support formulation design and the selection of nanomaterials to achieve the desired optimization of the mechanical properties. More insights about manufacturing, namely sonication energy and time, additives, curing humidity, and other experimental descriptors, may enhance the predictive ability of the established model
[43].
In another demonstration of ANN capability, 75 different material coefficients were utilized to model the microhardness of metal–ceramic nanocomposite coatings in order to select the proper combination of non-investigated experimental parameters (a factor model, limited by the variation range of the coefficients) and deliver the desired properties with high accuracy. This model contributed to the data-driven optimization of the nanogranular structure of a wide variety of nanocomposites, including FeCoZr–Al2O3, Fe–Al2O3, Co–Al2O3, Ca–SiO2, Co–CaF2, Fe–SiO2, and Co–MgO2 [40].
The main caveat in establishing structure–property relations with NN architectures is that the number of layers, number of nodes, and the type of network (recurrent, convolutional, feed-forward, etc.) may not be generalizable. For instance, when studying the mechanical properties of composites nanoreinforced with CNTs, titanium nanotubes, or Ag wires, factors such as different surface modifications, length, weight concentration, catalysts used in the production phase, diameter, surface energy, mechanical properties, and conductivity may require a larger number of hidden layers and more complex architectures to substantiate the relation between the nanomaterial used and the composite performance or target property. Currently, in the framework of scientific research and publications, this is unsustainable, as it requires a severe investment of resources to perform all experimental procedures regarding synthesis and testing. The research community can address the need for more systematic and structured information by sharing data in open databases and facilitating the development of usable models.
Currently, it seems that shared data, rather than the machine learning models themselves, are the resource that is actually reusable; this is problematic because the resources and computational energy spent in training cannot be reused, which also has an environmental impact. The inverse situation was thought to be the norm, since the main discussion has concerned the generalizability and universality of models: ideally, open models would be as generalized and as fast as possible, providing a framework of assessment tools for knowledge exploration, as well as enhanced and decentralized decision-making. However, cooperation and sharing are required to realize actual progress in this direction.
This entry is adapted from the peer-reviewed paper 10.3390/nano12152646