Machine learning in US Medical Diagnostics

Machine learning (ML) techniques have played a fundamental role in the analysis of US medical images, improving the reliability of diagnoses that are often compromised by the relatively poor quality of images due to noise and acquisition errors. Furthermore, ML techniques reduce operator dependence, standardize image interpretation, provide stable results, enable rapid decision-making, and relieve the heavy workload of radiologists.

machine learning; deep learning; ultrasound imaging; medical diagnostics

1. Breast

Breast cancer is one of the principal causes of cancer death in women, and its incidence is increasing. The probability that a woman will die from a breast tumor is about 1 in 39, and only 10% of cases are detected at the initial stages. Breast cancer can begin in different parts of the breast, which is made up of lobules, ducts, and stromal tissues. Most breast cancers begin in the cells that line the ducts, some begin in the cells that line the lobules, and a small number begin in the other tissues [1]. Breast cancer manifests itself mainly through a breast nodule or thickening that feels different from the surrounding tissue, lymph node enlargement, nipple discharge, a retracted nipple, or persistent tenderness of the breast.
A successful diagnosis in the early stages of breast cancer makes better treatment possible and increases the probability of survival [2]. Furthermore, the cost of breast cancer treatment is high. For these reasons, in recent years, several breast diagnostic approaches have been investigated, such as mammography, magnetic resonance imaging, computerized tomography, biopsy, and ultrasound imaging. The latter has, in the last few years, become an integral part of the characterization of breast masses because of the advantages previously described. In addition, compared to mammography, ultrasound is the most accessible imaging modality, is age-independent [3], and allows the assessment of breast density, which is often a predictor used in breast cancer risk evaluation and prevention. The breast density percentage is defined as the ratio between the area of the fibroglandular tissue and the total area of the breast. Breast ultrasound is also used to distinguish benign from malignant lesions.
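Since the breast density percentage is simply an area ratio, it can be computed directly from segmentation masks. The following is a minimal sketch, assuming binary masks of the fibroglandular tissue and of the whole breast are already available:

```python
import numpy as np

def breast_density_percentage(fibroglandular_mask: np.ndarray,
                              breast_mask: np.ndarray) -> float:
    """Breast density = fibroglandular area / total breast area, in percent."""
    breast_area = np.count_nonzero(breast_mask)
    if breast_area == 0:
        raise ValueError("empty breast mask")
    dense_area = np.count_nonzero(np.logical_and(fibroglandular_mask, breast_mask))
    return 100.0 * dense_area / breast_area
```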
For the purposes mentioned above, most of the techniques investigated are based on three principal issues, i.e., detection [4][5][6][7][8][9][10], segmentation [11][12][13][14][15][16][17][18][19][20][21][22], and classification [8][10][23][24][25][26][27][28][29][30][31][32][33][34][35][36][37][38][39].
Detection is fundamental in ultrasound analysis because it provides support for segmentation and/or classification between malignant and benign tumors. In a recent study, Gao et al. [5] proposed a method for the recognition of breast ultrasound nodules with a small amount of labeled data. Nodule detection was achieved by employing a faster region-based CNN. Benign and malignant nodules were classified through a semi-supervised classifier, based on the mean teacher model, trained on a small amount of labeled data. The results demonstrated that semi-supervised learning (SSL) achieved performances comparable to those obtained with supervised learning (SL) trained on a large amount of labeled data.
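To illustrate the mean teacher idea used in [5], the sketch below shows the core of one training step: a supervised loss on the few labeled samples plus a consistency loss tying the student's predictions to those of an exponential-moving-average teacher. The network definitions, loss weighting, and the input perturbations usually applied separately to student and teacher are assumptions left out for brevity:

```python
import torch
import torch.nn.functional as F

# Hypothetical student/teacher networks with identical architecture; the
# teacher's weights are an exponential moving average (EMA) of the student's.
def ema_update(teacher, student, alpha=0.99):
    with torch.no_grad():
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.mul_(alpha).add_(s, alpha=1 - alpha)

def mean_teacher_loss(student, teacher, x_labeled, y, x_unlabeled, w=1.0):
    # Supervised loss on the small labeled set.
    sup = F.cross_entropy(student(x_labeled), y)
    # Consistency loss: the student's predictions on unlabeled images should
    # match the teacher's (normally under different perturbations).
    with torch.no_grad():
        t_prob = F.softmax(teacher(x_unlabeled), dim=1)
    s_prob = F.softmax(student(x_unlabeled), dim=1)
    cons = F.mse_loss(s_prob, t_prob)
    return sup + w * cons
```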
Segmentation [11] has an important role in the clinical diagnosis of breast cancer due to its capability to discriminate different functional tissues, providing valuable references for image interpretation, tumor localization, and breast cancer diagnosis. A segmentation approach that combines fuzzy logic and deep learning was suggested by Badawy et al. [12] for automatic semantic segmentation of tumors in breast ultrasound images. The proposed scheme is based on two steps: the first consists of preprocessing based on a fuzzy intensification operator, and the second consists of CNN-based semantic segmentation, experimenting with eight known models. It is applied using two modes: batch and one-by-one image processing. The results demonstrated that fuzzy preprocessing was able to enhance the automatic semantic segmentation for every evaluated metric, but only in the case of batch processing. Another automatic semantic segmentation approach was proposed by Huang et al. [14]. In this approach, BUS images are first preprocessed using wavelet features; then, the augmented images are segmented through a fuzzy fully convolutional network (FCN), and, finally, a fine-tuned post-processing based on breast anatomy constraints is performed through conditional random fields (CRFs). The experimental results showed that the fuzzy FCN provided better performance than the non-fuzzy FCN, both in terms of robustness and accuracy; moreover, its performance exceeded all the other methods used for comparison and remained strong when small datasets were used. Ilesanmi et al. [15] used contrast-limited adaptive histogram equalization (CLAHE) to improve image quality. Semantic segmentation was performed through a variant of U-Net, named VEU-Net, based on a variant enhanced (VE) block, which encoded the preprocessed image, and concatenated convolutions that produced the segmentation mask. The results indicated that the VEU-Net produced better segmentation than the other classic CNN methods that were tested for comparison. An approach based on the integration of deep learning with visual saliency for breast tumor segmentation was proposed by Vakanski et al. [18]. Attention blocks were introduced into a U-Net architecture to learn feature representations that prioritize spatial regions with high saliency levels. The results demonstrated that the accuracy of tumor segmentation was better than for models without salient attention layers. An important merit of this investigation was the use of US images collected from different systems, which demonstrated the robustness of the technique.
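As an example of the kind of preprocessing used in [15], the snippet below applies CLAHE to a B-mode image with OpenCV; the file name, clip limit, and tile size are illustrative assumptions rather than the paper's settings:

```python
import cv2

# A minimal CLAHE preprocessing sketch in the spirit of Ilesanmi et al. [15];
# parameters here are illustrative assumptions, not the paper's values.
img = cv2.imread("bus_image.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(img)  # contrast-enhanced image fed to the segmenter
cv2.imwrite("bus_image_clahe.png", enhanced)
```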
Image classification is very important in medical diagnostics because it enables distinguishing benign lesions or tumors from malignant ones and a particular type of tissue from others. Shia et al. [23] presented a method based on a transfer learning algorithm to recognize and classify benign and malignant breast tumors from B-mode images. Specifically, feature extraction was performed by employing a deep residual network model (ResNet-101). The extracted features were classified through a linear SVM with a sequential minimal optimization solver. The experimental results highlighted that the proposed method was able to improve the quality and efficacy of clinical diagnosis. Chen et al. [34] presented a contrast-enhanced ultrasound (CEUS) video classification model for categorizing breast cancer into benign and malignant types. The model was based on a 3D CNN with a temporal attention module (DKG-TAM) incorporating temporal domain knowledge and a channel attention module (DKG-CAM) that included feature-based domain knowledge. It was found that the incorporation of domain knowledge led to improvements in sensitivity. A study aimed at testing the capability of AutoML Vision, a highly automated machine learning model, for breast lesion classification was presented by Wan et al. [36]. The performance of AutoML Vision was compared with traditional ML models using the most common classifiers (RF, KNN, LDA, LR, SVM, and NB) and a CNN designed in a TensorFlow environment. The AutoML Vision performances were, on average, comparable to the others, demonstrating its reliability for clinical practice. Finally, Huo et al. [38] experimentally evaluated six machine learning models (LR, RF, extra trees, SVM, multilayer perceptron, and XGBoost) for differentiating between benign and malignant breast lesions using data from different sources. Two examples of the ultrasound depictions of malignant breast lesions are shown in Figure 1. The experimental results demonstrated that the LR model exhibited better diagnostic efficiency than the others and was also better than clinician diagnosis (see Table 1).
Figure 1. Ultrasound depictions of malignant breast lesions: (a) lesion characterized by an irregular shape, with calcification indicated by the large arrow and a non-circumscribed margin by the thin arrow; (b) lesion characterized by an oval shape, with circumscribed margins indicated by the thin arrow and posterior enhancement features by the large arrow [38].
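As a sketch of the transfer-learning pipeline of Shia et al. [23], the snippet below extracts pooled deep features from a pretrained ResNet-101 and hands them to a linear SVM; the preprocessing, dataset handling, and classifier settings are assumptions for illustration:

```python
import torch
import torch.nn as nn
from torchvision import models, transforms
from sklearn.svm import SVC

# Deep features from a pretrained ResNet-101, classified by a linear SVM,
# in the spirit of Shia et al. [23]; details below are assumptions.
backbone = models.resnet101(weights=models.ResNet101_Weights.IMAGENET1K_V1)
backbone.fc = nn.Identity()  # expose the 2048-D pooled features
backbone.eval()

preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # B-mode frames are single-channel
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

@torch.no_grad()
def extract_features(pil_images):
    batch = torch.stack([preprocess(im) for im in pil_images])
    return backbone(batch).numpy()

# X_train: features of B-mode lesion crops; y_train: benign (0) / malignant (1)
# clf = SVC(kernel="linear").fit(extract_features(train_images), y_train)
```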
Table 1. Summary of detection ML algorithms employed in the analyzed studies, with respect to the organ investigated, diagnosis objective, technique, main results achieved, and dataset used.
| Ref. | Organ | Objective | Technique | Results | Dataset |
| --- | --- | --- | --- | --- | --- |
| [40] | Breast | Recognition of breast ultrasound nodules with low labeled data | Faster R-CNN for detection of nodules; SSL for classification | Mean accuracy: 87%; performances of SSL and SL are comparable | Public; 6746 and 2220 nodules |
| [41] | Arteries | Detection of end-diastolic frames in NIRS-IVUS images of coronary arteries | Bi-GRU NN trained on segments of 64 frames | Mean accuracy: 80%; better accuracy than expert analysts with Doppler criteria | Private; 20 coronary arteries |
| [42] | Heart | Evaluation of biomarkers from echocardiogram videos | CNN with residual connections and spatio-temporal convolutions for estimation of biomarker values | Mean ROC AUC: anemia 80%, BNP 84%, troponin I 75%, BUN 71.5% | Public; 108,521 echocardiogram studies |
| [43] | Heart | Extraction of information associated with myocardial remodeling from still ultrasound images | Texture-based features extracted with unsupervised similarity networks; ML models (DT, RF, LR, NN) for prediction of functional remodeling; LR for predicting presence of fibrosis | ROC AUC: 80%; sensitivity: 86.4%; specificity: 83.3%; prediction of myocardial fibrosis from ultrasound textures alone | Public; 392 subjects |
| [44] | Liver | Detection of gallstones and acute cholecystitis with still images for preliminary diagnosis | SSD and FPN to classify gallstones with features extracted by ResNet-50; MobileNetV2 to classify cholecystitis | ROC AUC: 92% (ResNet-50), 94% (MobileNetV2); detects cholecystitis and gallstones with acceptable discrimination and speed | Public; 89,000 images |
| [45] | Fetus | Automatic gestational age estimation from TC diameter as a POCUS solution | AlexNet variation for TC frame extraction; FCN for TC localization and measurement | TC plane detection accuracy: 99%; TC segmentation: 97.98%; accurate GA estimation | Private; 5000 TC images |
| [46] | Fetus | Automatic recognition and classification of FFUSP for diagnosis of facial conditions | LH-SVM: SVM learning features extracted by LBP and HOG | Accuracy: 94.67%; average precision: 94.25%; average recall: 93.88%; average F1 score: 94.88%; effective prediction and classification of FFUSP | Private; 943 standard planes, 424 nasolabial coronal planes, 50 nonstandard planes |
| [47] | Lungs | Assist diagnosis of COVID-19 on LUS images | Pre-trained ResNet50 with fully connected layer for feature extraction and global average pooling for feature classification | Average F1-score: 93.5% (balanced dataset), 95.3% (unbalanced dataset); improves radiologists' diagnostic performance | Public; 3909 images |

2. Arteries

Another major cause of death in the world is cardiovascular disease (CVD), caused principally by a pathological condition called atherosclerosis, which is characterized by alterations of artery walls that lose their elasticity because of the accumulation of calcium, cholesterol, or inflammatory cells. It is the principal cause of stroke and infarction. Early detection of plaques in the arteries has a fundamental role in the prevention of brain strokes. Ultrasound imaging represents a useful method for the analysis of carotid diseases through visualization and interpretation of carotid plaques, because a correct characterization of this disease is fundamental for identifying vulnerable plaques that may require surgery. A reliable and useful indicator of atherosclerosis is the so-called intima-media (IM) thickness, defined as the distance from the lumen-intima (LI) to the media-adventitia (MA) interface. Most studies have been devoted to the improvement of early atherosclerosis diagnosis; in this respect, three main issues are considered: detection [48][49][50][51][52][53][54], segmentation [55][56][57][58][59][60][61][62], and classification [40][63][64][65][66][67][68][69][70][71][72][73][74].
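Once the LI and MA interfaces have been segmented, the IM thickness itself is a simple geometric measurement. A toy sketch, assuming each interface is given as a depth profile (one depth value per lateral image column) and an assumed pixel spacing:

```python
import numpy as np

# Mean intima-media thickness (IMT) as the mean vertical distance between the
# lumen-intima (LI) and media-adventitia (MA) interfaces; the depth-profile
# representation and the pixel spacing below are illustrative assumptions.
def mean_imt(li_depth: np.ndarray, ma_depth: np.ndarray,
             mm_per_pixel: float = 0.06) -> float:
    thickness = (ma_depth - li_depth) * mm_per_pixel  # per-column thickness, mm
    return float(np.mean(thickness))
```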
As far as detection is concerned, Bajaj et al. [48] designed a novel deep learning methodology for the automated detection of end-diastolic frames in intravascular ultrasound (IVUS) images. Near-infrared spectroscopy (NIRS)-IVUS images were collected from 20 coronary arteries and co-registered with the concurrent electrocardiographic (ECG) signal for identification of end-diastolic frames. A bidirectional gated recurrent unit (Bi-GRU) neural network was trained on segments of 64 frames, each incorporating at least one cardiac cycle, and then the test set was processed to identify the end-diastolic frames. The proposed method demonstrated higher accuracy than expert analysts and conventional image-based (CIB) methodologies.
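A per-frame Bi-GRU labeler over a 64-frame segment can be sketched as follows; the feature dimension and layer sizes are illustrative assumptions, not the values used in [48]:

```python
import torch
import torch.nn as nn

# A minimal bidirectional GRU that labels each frame of a 64-frame IVUS
# segment as end-diastolic or not; sizes here are illustrative assumptions.
class BiGRUFrameDetector(nn.Module):
    def __init__(self, feat_dim=256, hidden=128):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)  # per-frame logit

    def forward(self, x):                # x: (batch, 64, feat_dim) frame features
        h, _ = self.gru(x)               # (batch, 64, 2*hidden)
        return self.head(h).squeeze(-1)  # (batch, 64) end-diastolic logits

logits = BiGRUFrameDetector()(torch.randn(2, 64, 256))  # example usage
```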
Two recent segmentation approaches based on DL have been proposed by Blanco et al. [58] and Zhou et al. [59]. The first method [58] employs small datasets for algorithm training. Specifically, a UNet++ ensemble algorithm, which combines eight individual UNet++ networks with different backbones and architectures in the encoder, is trained on three small databases to segment plaques in 2D carotid B-mode images. Good segmentation accuracy was achieved for different datasets without retraining. The second method [59] involves the concatenation of a multi-frame convolutional neural network (MFCNN), which exploits adjacency information present in longitudinally neighboring IVUS frames to deliver a preliminary segmentation, followed by a Gaussian process (GP) regressor that constructs the final lumen and vessel contours by filtering high-dimensional noise. The results obtained with the developed model demonstrated accurate segmentation in terms of image metrics, contour metrics, and clinically relevant variables, potentially enabling its use in clinical routine by reducing the costs involved in the manual management of IVUS datasets. Lo Vercio et al. [74] suggested an automatic detection method fundamentally based on two machine learning algorithms: SVM and RF. The first is employed to detect the lumen, media, and surrounding tissues, and the second to detect different morphological structures and to modify the initial layer classification depending on the detected structures. Subsequently, the resulting classification maps are fed into a segmentation method based on deformable contours to detect the LI and MA interfaces. The main steps of LI and MA segmentation are described in Figure 2.
Figure 2. Main steps of LI and MA segmentation: (a) B-mode image, (b) edge map, (c) contour segmentation, (d) final segmentation. LI and MA are marked in red and green, respectively [74].
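The GP-regression stage of Zhou et al. [59] can be illustrated on a toy contour: the CNN's noisy lumen boundary, expressed as radius versus angle, is smoothed by a periodic Gaussian process. The kernel choices and the synthetic contour below are assumptions:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ExpSineSquared, WhiteKernel

# Toy sketch: treat the noisy lumen contour as radius samples over angle and
# let a periodic GP filter the noise; kernel settings are assumptions.
theta = np.linspace(0, 2 * np.pi, 90)[:, None]           # contour angles
noisy_radius = (3.0 + 0.3 * np.sin(3 * theta[:, 0])
                + 0.15 * np.random.randn(90))            # CNN contour + noise
kernel = (ExpSineSquared(length_scale=1.0, periodicity=2 * np.pi)
          + WhiteKernel(noise_level=0.02))
gp = GaussianProcessRegressor(kernel=kernel).fit(theta, noisy_radius)
smooth_radius = gp.predict(theta)                        # final smooth contour
```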
With respect to classification, Saba et al. [66] focused on the classification of plaque tissues by employing four ML systems, one transfer learning system, and one deep learning architecture with different layers. Two types of plaque characterization were used: an AI-based mean feature strength and a bispectrum analysis. The results demonstrated that the proposed method was able to accurately characterize symptomatic carotid plaques, clearly discriminating them from asymptomatic ones. Another study on carotid diseases was published by Luo et al. [67], who proposed an innovative classification approach based on lower extremity arterial Doppler (LEAD) and duplex carotid ultrasound studies. They developed a hierarchical deep learning model for the classification of aortoiliac, femoropopliteal, and trifurcation disease and an RF algorithm for the classification of the degree of carotid stenosis from duplex carotid ultrasound studies. An automated interpretation of the LEAD and carotid duplex ultrasound studies was then developed through artificial intelligence. Subsequently, a statistical analysis was performed using a confusion matrix, and the reliability of the novel machine learning models in differentiating normal from diseased arterial systems was evaluated.

3. Heart

Echocardiography is one of the most employed diagnostic tests in cardiology, in which heart images are created through ultrasound, including Doppler modes. It is routinely employed in the diagnosis, management, and follow-up of patients with any suspected or known heart disease.
The heart is a muscular organ that pumps blood through the body and is divided into four chambers: the upper left and right atria and the lower left and right ventricles. Heart activity can be divided into two principal phases: systole and diastole. During systole, the myocardium contracts, ejecting blood to the lungs (right ventricle) and the body (left ventricle). During diastole, the cardiac muscle relaxes, expanding the heart's volume and causing blood to flow in. The heart has four valves, including the mitral valve, which separates the left atrium from the left ventricle and plays a fundamental role in regulating the blood transition from atrium to ventricle: it opens during diastole, while during systole it closes and prevents reflux towards the left atrium. Echocardiography can provide information about different anatomical aspects of the heart, including the position and shape of the atria and ventricles [75], and even other variables such as cardiac output, ejection fraction, and diastolic function. In addition, echocardiography enables the detection of a series of heart diseases, including cardiomyopathy, congenital heart diseases, aneurysm, and mitral valve diseases. However, one of the major issues in echocardiography is the difficulty of automatically classifying and identifying large databases of echocardiogram views in order to provide a diagnosis. The classification task is challenging because of several properties of echocardiograms, including the presence of noise, redundant information, acquisition errors, and the variability between different scans due to different acquisition techniques.
Several studies have been devoted to the automation of algorithms for the detection of anomalies and heart anatomy [76][77][78][79][80], and to the classification of echocardiogram views to provide a full and reliable assessment of cardiac functionality, improving diagnosis accuracy [81][82].
As far as detection is concerned, an advanced DL-based method for the evaluation of several biomarkers from echocardiogram videos has been developed by Hughes et al. [76]. The method, named EchoNet-Labs, is a CNN with residual connections and spatio-temporal convolutions that provides a beat-by-beat estimate of biomarker values. Experimental results demonstrated high accuracy in detecting abnormal values of hemoglobin, troponin I, and other biomarkers, and better performance compared to models based on traditional risk factors. A detection method using radiomics-style texture analysis and supervised learning was proposed by Kagiyama et al. [77], who designed a low-cost texture-based pipeline for the prediction of fibrosis and myocardial tissue remodeling. The first part of the method consists of the extraction of 328 texture-based features of the myocardium from ultrasound images and the exploration of myocardial texture phenotypes through unsupervised similarity networks. The second part involves the employment of supervised machine learning models (decision trees, RF, logistic regression models, neural networks) for the prediction of functional left ventricular remodeling, while, in the third part, supervised models (logistic regression models) are employed for predicting the presence of myocardial fibrosis. Figure 3 shows a comparison of two myocardial fibrosis predictions from ultrasound and magnetic resonance images.
Figure 3. Prognosis of myocardial fibrosis. Three ultrasound renderings and the corresponding myocardial textures [77].
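The "residual connections plus spatio-temporal convolutions" design of [76] can be sketched with an off-the-shelf video backbone. Below, torchvision's R(2+1)D-18 stands in for the actual EchoNet-Labs architecture, with its classification head swapped for a single-value regression head; the clip size and backbone choice are assumptions:

```python
import torch
import torch.nn as nn
from torchvision.models.video import r2plus1d_18

# A video CNN with residual connections and spatio-temporal convolutions
# regressing one biomarker value (e.g., hemoglobin) from an echo clip;
# this stand-in does not reproduce EchoNet-Labs' exact architecture.
model = r2plus1d_18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 1)  # single biomarker value

clip = torch.randn(1, 3, 32, 112, 112)  # (batch, channels, frames, H, W)
predicted_value = model(clip)           # beat-level estimates can be averaged
```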
A classification deep learning approach was developed by Vaseli et al. [81]. They defined a method for obtaining a lightweight deep learning model for the classification of 12 standard echocardiography views, employing a large echocardiography dataset. For this purpose, three different teacher networks were implemented, each consisting of a CNN module and a fully-connected (FC) module, where the CNN module is based on one of three advanced deep learning architectures, i.e., VGG-16, DenseNet, and ResNet. A dataset of 16,612 echo cines obtained from 3151 unique patients across several ultrasound imaging machines was employed for the development and evaluation of the networks. The proposed models were shown to be lightweight and faster than state-of-the-art large deep models, and to be suitable for POCUS diagnosis.
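Lightweight students are typically trained from such teachers by knowledge distillation. The sketch below shows a generic distillation loss of the kind such a setup could use; the temperature, weighting, and correspondence to the exact objective in [81] are assumptions:

```python
import torch
import torch.nn.functional as F

# Generic knowledge distillation: a lightweight student mimics the softened
# outputs of a large teacher (VGG-16/DenseNet/ResNet-based teachers in [81]).
def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * T * T                       # rescale gradients for the temperature
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```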

4. Liver

Liver disease is one of the principal causes of death worldwide and comprises a wide range of diseases with varied or unknown origins. In 2017, about 1.32 million deaths worldwide were due to cirrhosis. Furthermore, liver cancer is the fifth most common cancer and the second most common cause of cancer death according to the World Health Organization (WHO). The studied pathologies can be summarized as:
  • focal liver lesions, solid formations that can be benign or malignant,
  • liver fibrosis, excessive accumulation of extracellular matrix proteins, such as collagen,
  • fatty liver or liver steatosis, conditions based on the accumulation of excess fat in the liver,
  • liver tumors.
A number of studies have sought to develop automated algorithms for detection [83][84][85][86][87], segmentation [86], and classification [86][88][89][90][91][92][93][94][95][96][97][98][99][100] of the diseases described above.
Yu et al. [83] developed a machine learning system to detect and localize gallstones and to detect acute cholecystitis using still images for rapid, low-cost preliminary diagnoses. A single-shot multibox detector (SSD) and a feature pyramid network (FPN) were used to classify and localize objects, using image features extracted by ResNet-50 for gallstones and MobileNetV2 to classify cholecystitis. The deep learning models were pretrained using public datasets. The experimental results demonstrated the capability of the proposed system to detect cholecystitis and gallstones with acceptable discrimination and speed, and its suitability for point-of-care ultrasound (POCUS).
A recent study by Cha et al. [101] proposed a deep learning model aimed at automatically quantifying the hepatorenal index (HRI) for the ultrasound evaluation of fatty liver, in order to overcome limitations due to interobserver and intraobserver variability. They developed an organ segmentation method based on a deep convolutional neural network (DCNN) with Gaussian mixture modeling for automated quantification of the HRI from B-mode abdominal ultrasound images. Interobserver agreement for the measured brightness of the liver and kidney and the calculated HRI was analyzed between two board-certified radiologists and the DCNN using intraclass correlation coefficients. The HRI values quantified automatically through the DCNN were found to be similar to those obtained by expert radiologists.
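The HRI itself reduces to a brightness ratio between two regions of interest. A minimal sketch, assuming the liver and renal-cortex ROIs are available as binary masks (in [101] they come from the DCNN segmentation):

```python
import numpy as np

# Hepatorenal index (HRI): ratio of mean liver echogenicity to mean
# renal-cortex echogenicity over two ROIs at comparable depth.
def hepatorenal_index(image: np.ndarray,
                      liver_mask: np.ndarray,
                      kidney_mask: np.ndarray) -> float:
    liver_brightness = image[liver_mask].mean()
    kidney_brightness = image[kidney_mask].mean()
    return float(liver_brightness / kidney_brightness)
```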
Regarding classification, Wang et al. [89] proposed a method to differentiate malignant from benign focal liver lesions through two-dimensional shear-wave elastography (2D-SWE)-based ultrasomics (ultrasound-based radiomics). The ultrasomics technique was employed to extract features from 2D-SWE images that were used to define an ultrasomics score model, while SWE measurements and ultrasomics features were used to define a combined score model through an SVM algorithm. Good diagnostic accuracy of the combined score in differentiating malignant from benign focal liver lesions was demonstrated. The authors highlighted, however, that a higher number of cases would be required to better train the ML model and achieve more reliable results. An alternative approach based on ultrasomics was proposed by Peng et al. [90], who concentrated on the differentiation of infected focal liver lesions from malignant mimickers. In particular, they defined an ultrasomics model based on machine learning methods with ultrasomics features extracted from grayscale images, with dimensionality reduction methods and classifiers employed to carry out feature selection and predictive modeling. The experimental results demonstrated the usefulness of ultrasomics in differentiating infected focal liver lesions from malignant mimickers. An alternative approach focusing on ultrasound SWE was proposed by Brattain et al. [92], who developed an automated method for the classification of liver fibrosis stages. This method was based on the integration of three modules for the evaluation of SWE image quality, the selection of a region of interest, and machine learning-based (SVM, RF, CNN, and FCNN) multi-image SWE classification for fibrosis stage ≥ F2. The performance of the system was compared with manual methods, showing that the proposed method improved classification accuracy. A study focused on liver steatosis was published by Neogi et al. [98]. They presented a novel set of features that exploit the anisotropy of liver texture. The features were obtained using a gray-level difference histogram, a pair correlation function, probabilistic local directionality statistics, and the randomness of texture. Three datasets that included anisotropy features were employed for the classification of images using five classifiers: MLP, PNN, LVQ, SVM, and Bayesian. The best results were achieved with PNN and anisotropy features.
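The texture-features-plus-classifier pattern common to these "ultrasomics" pipelines can be sketched generically with gray-level co-occurrence matrix (GLCM) features feeding an SVM; the feature choice, angles, and classifier below are illustrative assumptions, not the cited pipelines:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

# Toy ultrasomics-style sketch: GLCM texture features from uint8 grayscale
# liver ROIs, classified by an SVM; all parameter choices are assumptions.
def glcm_features(roi_uint8: np.ndarray) -> np.ndarray:
    glcm = graycomatrix(roi_uint8, distances=[1],
                        angles=[0, np.pi / 2], levels=256, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# X = np.vstack([glcm_features(r) for r in rois]); y = lesion labels
# clf = SVC(kernel="rbf").fit(X, y)
```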

5. Fetus

Ultrasound imaging was introduced into the field of obstetrics by Donald et al. [41], and, since then, it has become the most commonly used imaging modality for investigating several factors related to fetal diagnosis, such as fetal biometric measurements, including head and abdominal circumferences and biparietal diameter, and fetal cardiac activity. Several scientific studies have been devoted to improving the quality of prenatal diagnosis by focusing on three main issues: detection of anomalies, fetal measurements, scanning planes, and heartbeat [42][43][44][45][46][47][102][103]; segmentation of fetal anatomy in ultrasound images and videos [42][103][104][105][106][107][108][109]; and classification of fetal standard planes, congenital anomalies, biometric measures, and fetal facial expressions [42][43][102][107][109][110][111][112][113][114][115].
A detection approach based on DL was proposed by Maraci et al. [42]. They designed a method for point-of-care ultrasound estimation of fetal gestational age (GA) from the trans-cerebellar (TC) diameter. In the first step, TC plane frames are extracted from a short ultrasound video using a standard CNN based on a variation of the AlexNet architecture. Then, an FCN is employed to localize the TC structure and to perform TC diameter estimation. GA is finally derived through a standard equation. A good agreement was found between the automatic and manual estimation of GA. A recent ML detection method has been published by Wang et al. [43], who focused on the accurate identification of the fetal facial ultrasound standard plane (FFUSP), which has a significant role in facial deformity detection and disease screening, such as cleft lip and palate detection. The authors proposed an LH-SVM texture feature fusion method for automatic recognition and classification of FFUSP. Texture features were extracted from US images through a local binary pattern (LBP) and a histogram of oriented gradients (HOG); subsequently, the features were fused and an SVM was employed for predictive classification. The performances obtained demonstrated that the proposed method was able to effectively predict and classify FFUSP.
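A minimal sketch of LBP + HOG feature fusion with an SVM, in the spirit of the LH-SVM method of [43]; the LBP/HOG parameters and the linear kernel are assumptions:

```python
import numpy as np
from skimage.feature import local_binary_pattern, hog
from sklearn.svm import SVC

# Fused LBP + HOG texture descriptor for a grayscale plane image;
# parameter values here are illustrative assumptions.
def lbp_hog_features(gray: np.ndarray) -> np.ndarray:
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    hog_vec = hog(gray, orientations=9, pixels_per_cell=(16, 16),
                  cells_per_block=(2, 2))
    return np.hstack([lbp_hist, hog_vec])  # fused texture descriptor

# X = np.vstack([lbp_hog_features(img) for img in images])
# clf = SVC(kernel="linear").fit(X, plane_labels)
```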
With respect to segmentation, Dozen et al. [104] proposed a novel method for segmentation of the ventricular septum in fetal cardiac ultrasound videos, called cropping-segmentation-calibration (CSC). This method is based on the time-series information of the videos and on specific information, obtained by cropping the original frame, used to calibrate the U-Net output. The experimental results demonstrated a clear improvement in performance with respect to general segmentation methods, such as DeepLab v3+ and U-Net.
A novel model-agnostic DL method (MFCY) was proposed by Shozu et al. [105] in order to improve the segmentation performance of the thoracic wall in ultrasound videos. Three standard segmentation networks (DeepLabV3+), pre-trained with the original video sequences and labels of the thoracic wall (TW), thoracic cavity (TC), and whole thorax (WT), were used to perform a preliminary segmentation of the video sequence. Then, a multi-frame method (MF) was used to extract predictions for each labeled target. Finally, a cylinder method (CY) integrated the three prediction labels for the final segmentation. The results showed an improvement in the segmentation performance of the thoracic wall in fetal ultrasound videos without altering the neural network structure.
Perez-Gonzalez et al. [106] presented a method, named probabilistic learning coherent point drift (PL-CPD), for automatic registration of real 3D ultrasound fetal brain volumes with a significant degree of occlusion artifacts, noise, and missing data. Different acquisition planes of the brain were preprocessed to extract confidence maps and texture features, which were used for segmentation purposes and to estimate probabilistic weights by means of random forest classification. Point clouds were finally registered using a variation of the coherent point drift (CPD) method that basically assigns probabilistic weights to the point cloud. The experimental results, although obtained from a relatively small dataset, demonstrated the high suitability of the proposed method for automatic registration of fetal head volumes.
A recent deep learning classification model was developed by Rasheed et al. [107] for the automation of fetal head biometry employing a live ultrasound feed. Initially, head frames obtained from the ultrasound videos were classified through the CNN AlexNet. The classified head frames were then validated through occipitofrontal diameter (OFD) measurement. Subsequently, the classified head frames were segmented by a U-Net with masks and annotated images. Then, least-squares ellipse (LSE) fitting was employed to compute the biparietal diameter (BPD) and head circumference (HC). This approach enabled accurate computation of the gestational age with very little interaction of the sonographer with the system.
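The biometry step reduces to fitting a least-squares ellipse to the segmented skull contour and reading BPD and HC off its axes. A toy sketch, where the mask source, pixel spacing, and the convention of taking the short axis as BPD are assumptions:

```python
import cv2
import numpy as np

# Fit a least-squares ellipse to a binary skull mask and derive biparietal
# diameter (BPD) and head circumference (HC); details here are assumptions.
def head_biometry(skull_mask: np.ndarray, mm_per_pixel: float):
    contours, _ = cv2.findContours(skull_mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    (cx, cy), (d1, d2), angle = cv2.fitEllipse(max(contours, key=cv2.contourArea))
    a, b = max(d1, d2) / 2, min(d1, d2) / 2   # semi-axes in pixels
    bpd = 2 * b * mm_per_pixel                # short axis taken as BPD
    # Ramanujan approximation for the ellipse perimeter gives the HC.
    hc = np.pi * (3 * (a + b) - np.sqrt((3 * a + b) * (a + 3 * b))) * mm_per_pixel
    return bpd, hc
```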

6. Lungs

Computed tomography (CT) is considered the imaging gold standard for pulmonary disease due to its high reliability. However, CT presents a series of disadvantages: it exposes the patient to ionizing radiation, and it is expensive and non-portable. A valid alternative is represented by lung ultrasound (LUS), which is cheap, safe, portable, and capable of generating medical images in real time. LUS has been used for many years for the evaluation of several lung diseases, including tumors [116][117], interstitial diseases [118][119], post-extubation distress [120], lung edemas [121], and subpleural lesions [122]. In very recent years, research activity into lung ultrasonography has grown significantly due to the worldwide spread of the COVID-19 pandemic. In particular, in COVID-19 evaluation, the use of AI has assumed an increasingly important role in the analysis of images, in order to make rapid decisions and relieve the heavy workload of radiologists.
AI techniques reduce operator dependence, standardize the interpretation of images, and provide stable results; they have focused principally on COVID-19 syndrome detection [123][124][125][126][127][128], segmentation of lung regions [124][125][126][127][128][129], and classification of lung diseases as COVID-19 positive or negative [124][125][126][127][128][130][131][132][133].
With respect to detection, Shang et al. [123] proposed a CAD system that extracts features from LUS images through a residual network (ResNet) to assist radiologists in distinguishing COVID-19 syndrome from healthy lungs and non-COVID pneumonia. The architecture of the ResNet, pre-trained using ImageNet, was modified by adding a fully connected layer for feature extraction and global average pooling for feature classification. Then, the gradient-weighted class activation mapping (Grad-CAM) method was used to create an activation map that highlights the crucial areas to help radiologist visualization. In the experiments carried out, the CAD system proved capable of improving radiologists' performance in COVID-19 diagnosis.
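Grad-CAM itself is generic: gradients of the class score with respect to the last convolutional feature maps weight those maps into a coarse localization heatmap. A minimal sketch over a ResNet-50, where the input is a placeholder and the layer choice is an assumption:

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet50

# Minimal Grad-CAM: hooks capture the last conv block's activations and
# gradients; their gradient-weighted sum gives the class activation map.
model = resnet50(weights=None).eval()
feats, grads = {}, {}
layer = model.layer4
layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

x = torch.randn(1, 3, 224, 224, requires_grad=True)   # placeholder LUS image
score = model(x)[0].max()                             # top-class score
score.backward()

weights = grads["a"].mean(dim=(2, 3), keepdim=True)   # GAP over gradients
cam = F.relu((weights * feats["a"]).sum(dim=1))       # (1, 7, 7) heatmap
cam = F.interpolate(cam[None], size=(224, 224), mode="bilinear")[0]
```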
An interesting segmentation method for accurate COVID-19 diagnosis has been proposed by Xue et al. [129]. The method is based on a dual-level supervised multiple instance learning module (DSA-MIL) and can predict patient severity from heterogeneous LUS data of multiple lung zones. An original modality alignment contrastive learning module (MA-CLR) is proposed for combining LUS data and clinical information. The nonlinear mapping was trained through a staged representation transfer (SRT) strategy. This method demonstrated great potential in real clinical practice for COVID-19 patients, particularly for pregnant women and children.
A classification deep learning procedure was proposed by Tsai et al. [130], who defined a standardized protocol combined with a deep learning model based on a spatial transformer network for automatic pleural effusion classification. Supervised and weakly supervised approaches, based on frame and video ground-truth labels, respectively, were then used to train the deep learning models. The method was compared with expert clinical image interpretation, with similar accuracy obtained for both, bringing closer the possibility of automatic, efficient, and reliable diagnosis of lung diseases.

7. Other Organs

Machine learning on ultrasound is being successfully applied to a number of other organs, including:
  • Prostate [134][135][136]: research activity has mainly focused on prostate segmentation in ultrasound images, which is fundamental in biopsy needle placement and radiotherapy treatment planning and is quite challenging due to the relatively low quality of US images. In recent years, segmentation based on deep learning techniques has been widely developed due to its several benefits compared to classical techniques, which are difficult to apply in real-time image-guided interventions.
  • Thyroid [33][137][138][139][140][141][142][143][144][145][146][147][148][149][150][151]: the risk of malignancy of thyroid nodules can be evaluated on the basis of nodule ultrasonographic characteristics, such as echogenicity and calcification. Much activity has been devoted to automating thyroid nodule detection through CAD systems, mainly based on CNNs.
  • Kidneys [152][153][154][155][156][157][158][159][160][161][162][163][164][165]: US image-based diagnoses are widely used for the detection of kidney abnormalities, including cysts and tumors. For the early diagnosis of kidney diseases, DNNs and SVMs are very often used as machine learning models for abnormality detection and classification.
Figure 4 presents a summary histogram reporting, for each analyzed organ, the frequency of application of the different ML techniques in the analyzed period. As can be seen, DL techniques based on CNNs are clearly the most popular for almost all organs. In particular, for the breast and liver, which are the most investigated organs, CNNs are employed in about 63 and 50 percent of cases, respectively. Only for arteries is a slight predominance of SVM methods observed.
Figure 4. Frequency of ML algorithms application across all organs.

References

  1. Gao, Y.; Liu, B.; Zhu, Y.; Chen, L.; Tan, M.; Xiao, X.; Yu, G.; Guo, Y. Detection and recognition of ultrasound breast nodules based on semi-supervised deep learning: A powerful alternative strategy. Quant. Imaging Med. Surg. 2021, 11, 2265–2278.
  2. Bajaj, R.; Huang, X.; Kilic, Y.; Jain, A.; Ramasamy, A.; Torii, R.; Moon, J.; Koh, T.; Crake, T.; Parker, M.; et al. A deep learning methodology for the automated detection of end-diastolic frames in intravascular ultrasound images. Int. J. Cardiovasc. Imaging 2021, 37, 1825–1837.
  3. Hughes, J.; Yuan, N.; He, B.; Ouyang, J.; Ebinger, J.; Botting, P.; Lee, J.; Theurer, J.; Tooley, J.; Nieman, K.; et al. Deep learning evaluation of biomarkers from echocardiogram videos. EBioMedicine 2021, 73, 103613.
  4. Kagiyama, N.; Shrestha, S.; Cho, J.; Khalil, M.; Singh, Y.; Challa, A.; Casaclang-Verzosa, G.; Sengupta, P. A low-cost texture-based pipeline for predicting myocardial tissue remodeling and fibrosis using cardiac ultrasound: Texture-based myocardial tissue characterization using cardiac ultrasound. EBioMedicine 2020, 54, 102726.
  5. Yu, C.J.; Yeh, H.J.; Chang, C.C.; Tang, J.H.; Kao, W.Y.; Chen, W.C.; Huang, Y.J.; Li, C.H.; Chang, W.H.; Lin, Y.T.; et al. Lightweight deep neural networks for cholelithiasis and cholecystitis detection by point-of-care ultrasound. Comput. Methods Programs Biomed. 2021, 211, 106382.
  6. Maraci, M.; Yaqub, M.; Craik, R.; Beriwal, S.; Self, A.; Von Dadelszen, P.; Papageorghiou, A.; Noble, J. Toward point-of-care ultrasound estimation of fetal gestational age from the trans-cerebellar diameter using CNN-based ultrasound image analysis. J. Med. Imaging 2020, 7, 014501.
  7. Wang, X.; Liu, Z.; Du, Y.; Diao, Y.; Liu, P.; Lv, G.; Zhang, H. Recognition of Fetal Facial Ultrasound Standard Plane Based on Texture Feature Fusion. Comput. Math. Methods Med. 2021, 2021, 6656942.
  8. Shang, S.; Huang, C.; Yan, W.; Chen, R.; Cao, J.; Zhang, Y.; Guo, Y.; Du, G. Performance of a computer aided diagnosis system for SARS-CoV-2 pneumonia based on ultrasound images. Eur. J. Radiol. 2022, 146, 110066.
  9. Li, J.; Bu, Y.; Lu, S.; Pang, H.; Luo, C.; Liu, Y.; Qian, L. Development of a Deep Learning–Based Model for Diagnosing Breast Nodules With Ultrasound. J. Ultrasound Med. 2021, 40, 513–520.
  10. Wang, S.; Niu, S.; Qu, E.; Forsberg, F.; Wilkes, A.; Sevrukov, A.; Nam, K.; Mattrey, R.; Ojeda-Fournier, H.; Eisenbrey, J. Characterization of indeterminate breast lesions on B-mode ultrasound using automated machine learning models. J. Med. Imaging 2020, 7, 057002.
  11. Gu, P.; Lee, W.M.; Roubidoux, M.; Yuan, J.; Wang, X.; Carson, P. Automated 3D ultrasound image segmentation to aid breast cancer image interpretation. Ultrasonics 2016, 65, 51–58.
  12. Badawy, S.; Mohamed, A.N.; Hefnawy, A.; Zidan, H.; GadAllah, M.; El-Banby, G. Automatic semantic segmentation of breast tumors in ultrasound images based on combining fuzzy logic and deep learning—A feasibility study. PLoS ONE 2021, 16, e0251899.
  13. Hu, Y.; Guo, Y.; Wang, Y.; Yu, J.; Li, J.; Zhou, S.; Chang, C. Automatic tumor segmentation in breast ultrasound images using a dilated fully convolutional network combined with an active contour model. Med. Phys. 2019, 46, 215–228.
  14. Huang, K.; Zhang, Y.; Cheng, H.; Xing, P.; Zhang, B. Semantic segmentation of breast ultrasound image with fuzzy deep learning network and breast anatomy constraints. Neurocomputing 2021, 450, 319–335.
  15. Ilesanmi, A.; Chaumrattanakul, U.; Makhanov, S. A method for segmentation of tumors in breast ultrasound images using the variant enhanced deep learning. Biocybern. Biomed. Eng. 2021, 41, 802–818.
  16. Liao, W.X.; He, P.; Hao, J.; Wang, X.Y.; Yang, R.L.; An, D.; Cui, L.G. Automatic Identification of Breast Ultrasound Image Based on Supervised Block-Based Region Segmentation Algorithm and Features Combination Migration Deep Learning Model. IEEE J. Biomed. Health Inform. 2020, 24, 984–993.
  17. Pourasad, Y.; Zarouri, E.; Parizi, M.; Mohammed, A. Presentation of novel architecture for diagnosis and identifying breast cancer location based on ultrasound images using machine learning. Diagnostics 2021, 11, 1870.
  18. Vakanski, A.; Xian, M.; Freer, P. Attention-Enriched Deep Learning Model for Breast Tumor Segmentation in Ultrasound Images. Ultrasound Med. Biol. 2020, 46, 2819–2833.
  19. Webb, J.; Adusei, S.; Wang, Y.; Samreen, N.; Adler, K.; Meixner, D.; Fazzio, R.; Fatemi, M.; Alizad, A. Comparing deep learning-based automatic segmentation of breast masses to expert interobserver variability in ultrasound imaging. Comput. Biol. Med. 2021, 139, 104966.
  20. Xu, Y.; Wang, Y.; Yuan, J.; Cheng, Q.; Wang, X.; Carson, P. Medical breast ultrasound image segmentation by machine learning. Ultrasonics 2019, 91, 1–9.
  21. Yap, M.; Goyal, M.; Osman, F.; Martí, R.; Denton, E.; Juette, A.; Zwiggelaar, R. Breast ultrasound lesions recognition: End-to-end deep learning approaches. J. Med. Imaging 2019, 6, 011007.
  22. Han, L.; Huang, Y.; Dou, H.; Wang, S.; Ahamad, S.; Luo, H.; Liu, Q.; Fan, J.; Zhang, J. Semi-supervised segmentation of lesion from breast ultrasound images with attentional generative adversarial network. Comput. Methods Programs Biomed. 2020, 189, 105275.
  23. Shia, W.C.; Lin, L.S.; Chen, D.R. Classification of malignant tumours in breast ultrasound using unsupervised machine learning approaches. Sci. Rep. 2021, 11, 1418.
  24. Gonzelez-Luna, F.; Hernandez-Lopez, J.; Gomez-Flores, W. A performance evaluation of machine learning techniques for breast ultrasound classification. In Proceedings of the 2019 16th International Conference on Electrical Engineering, Computing Science and Automatic Control (CCE 2019), Mexico City, Mexico, 11–13 September 2019.
  25. Fleury, E.; Marcomini, K. Performance of machine learning software to classify breast lesions using BI-RADS radiomic features on ultrasound images. Eur. Radiol. Exp. 2019, 3, 34.
  26. Destrempes, F.; Trop, I.; Allard, L.; Chayer, B.; Khoury, M.; Lalonde, L.; Cloutier, G. BI-RADS assessment of solid breast lesions based on quantitative ultrasound and machine learning. In Proceedings of the IEEE International Ultrasonics Symposium (IUS), Glasgow, UK, 6–9 October 2019; Volume 2019, pp. 1909–1911.
  27. Mishra, A.; Roy, P.; Bandyopadhyay, S.; Das, S. Breast ultrasound tumour classification: A Machine Learning—Radiomics based approach. Expert Syst. 2021, 38, e12713.
  28. Romeo, V.; Cuocolo, R.; Apolito, R.; Stanzione, A.; Ventimiglia, A.; Vitale, A.; Verde, F.; Accurso, A.; Amitrano, M.; Insabato, L.; et al. Clinical value of radiomics and machine learning in breast ultrasound: A multicenter study for differential diagnosis of benign and malignant lesions. Eur. Radiol. 2021, 31, 9511–9519.
  29. Zhang, Z.; Li, Y.; Wu, W.; Chen, H.; Cheng, L.; Wang, S. Tumor detection using deep learning method in automated breast ultrasound. Biomed. Signal Process. Control 2021, 68, 102677.
  30. Shin, S.; Lee, S.; Yun, I.; Kim, S.; Lee, K. Joint weakly and semi-supervised deep learning for localization and classification of masses in breast ultrasound images. IEEE Trans. Med. Imaging 2019, 38, 762–774.
  31. Tanaka, H.; Chiu, S.W.; Watanabe, T.; Kaoku, S.; Yamaguchi, T. Computer-aided diagnosis system for breast ultrasound images using deep learning. Phys. Med. Biol. 2019, 64, 235013.
  32. Wu, T.; Sultan, L.; Tian, J.; Cary, T.; Sehgal, C. Machine learning for diagnostic ultrasound of triple-negative breast cancer. Breast Cancer Res. Treat. 2019, 173, 365–373.
  33. Zhu, Y.C.; AlZoubi, A.; Jassim, S.; Jiang, Q.; Zhang, Y.; Wang, Y.B.; Ye, X.D.; DU, H. A generic deep learning framework to classify thyroid and breast lesions in ultrasound images. Ultrasonics 2021, 110, 106300.
  34. Chen, C.; Wang, Y.; Niu, J.; Liu, X.; Li, Q.; Gong, X. Domain Knowledge Powered Deep Learning for Breast Cancer Diagnosis Based on Contrast-Enhanced Ultrasound Videos. IEEE Trans. Med. Imaging 2021, 40, 2439–2451.
  35. Marcon, M.; Ciritsis, A.; Rossi, C.; Becker, A.; Berger, N.; Wurnig, M.; Wagner, M.; Frauenfelder, T.; Boss, A. Diagnostic performance of machine learning applied to texture analysis-derived features for breast lesion characterisation at automated breast ultrasound: A pilot study. Eur. Radiol. Exp. 2019, 3, 44.
  36. Wan, K.; Wong, C.; Ip, H.; Fan, D.; Yuen, P.; Fong, H.; Ying, M. Evaluation of the performance of traditional machine learning algorithms, convolutional neural network and AutoML Vision in ultrasound breast lesions classification: A comparative study. Quant. Imaging Med. Surg. 2021, 11, 1381–1393.
  37. Al-Dhabyani, W.; Fahmy, A.; Gomaa, M.; Khaled, H. Deep learning approaches for data augmentation and classification of breast masses using ultrasound images. Int. J. Adv. Comput. Sci. Appl. 2019, 10, 618–627.
  38. Huo, L.; Tan, Y.; Wang, S.; Geng, C.; Li, Y.; Ma, X.; Wang, B.; He, Y.; Yao, C.; Ouyang, T. Machine learning models to improve the differentiation between benign and malignant breast lesions on ultrasound: A multicenter external validation study. Cancer Manag. Res. 2021, 13, 3367–3379.
  39. Ciritsis, A.; Rossi, C.; Eberhard, M.; Marcon, M.; Becker, A.; Boss, A. Automatic classification of ultrasound breast lesions using a deep convolutional neural network mimicking human decision-making. Eur. Radiol. 2019, 29, 5458–5468.
  40. Bajaj, R.; Huang, X.; Kilic, Y.; Jain, A.; Ramasamy, A.; Torii, R.; Moon, J.; Koh, T.; Crake, T.; Parker, M.; et al. A deep learning methodology for the automated detection of end-diastolic frames in intravascular ultrasound images. Int. J. Cardiovasc. Imaging 2021, 37, 1825–1837.
  41. Jana, B.; Oswal, K.; Mitra, S.; Saha, G.; Banerjee, S. Detection of peripheral arterial disease using Doppler spectrogram based expert system for Point-of-Care applications. Biomed. Signal Process. Control 2019, 54, 101599.
  42. Sakar, B.; Serbes, G.; Aydin, N. Emboli detection using a wrapper-based feature selection algorithm with multiple classifiers. Biomed. Signal Process. Control 2022, 71, 103080.
  43. Sofian, H.; Than, J.; Mohammad, S.; Noor, N. Calcification detection of coronary artery disease in intravascular ultrasound image: Deep feature learning approach. Int. J. Integr. Eng. 2018, 10, 43–57.
  44. Sofian, H.; Than, J.; Mohamad, S.; Noor, N. Calcification detection for intravascular ultrasound image using direct acyclic graph architecture: Pre-Trained model for 1-channel image. Indones. J. Electr. Eng. Comput. Sci. 2021, 22, 787–794.
  45. Sofian, H.; Ming, J.; Muhammad, S.; Noor, N. Calcification detection using convolutional neural network architectures in intravascular ultrasound images. Indones. J. Electr. Eng. Comput. Sci. 2019, 17, 1313–1321.
  46. Willemink, M.; Varga-Szemes, A.; Schoepf, U.; Codari, M.; Nieman, K.; Fleischmann, D.; Mastrodicasa, D. Emerging methods for the characterization of ischemic heart disease: Ultrafast Doppler angiography, micro-CT, photon-counting CT, novel MRI and PET techniques, and artificial intelligence. Eur. Radiol. Exp. 2021, 5, 12.
  47. Cui, H.; Xia, Y.; Zhang, Y. Supervised machine learning for coronary artery lumen segmentation in intravascular ultrasound images. Int. J. Numer. Methods Biomed. Eng. 2020, 36, e3348.
  48. Zhang, C.; Guo, X.; Guo, X.; Molony, D.; Li, H.; Samady, H.; Giddens, D.; Athanasiou, L.; Tang, D.; Nie, R.; et al. Machine learning model comparison for automatic segmentation of intracoronary optical coherence tomography and plaque cap thickness quantification. CMES—Comput. Model. Eng. Sci. 2020, 123, 631–646.
  49. Bajaj, R.; Huang, X.; Kilic, Y.; Ramasamy, A.; Jain, A.; Ozkor, M.; Tufaro, V.; Safi, H.; Erdogan, E.; Serruys, P.; et al. Advanced deep learning methodology for accurate, real-time segmentation of high-resolution intravascular ultrasound images. Int. J. Cardiol. 2021, 339, 185–191.
  50. Blanco, P.; Ziemer, P.; Bulant, C.; Ueki, Y.; Bass, R.; Räber, L.; Lemos, P.; García-García, H. Fully automated lumen and vessel contour segmentation in intravascular ultrasound datasets. Med. Image Anal. 2022, 75, 102262.
  51. Zhou, R.; Guo, F.; Azarpazhooh, M.; Hashemi, S.; Cheng, X.; Spence, J.; Ding, M.; Fenster, A. Deep Learning-Based Measurement of Total Plaque Area in B-Mode Ultrasound Images. IEEE J. Biomed. Health Inform. 2021, 25, 2967–2977.
  52. Lee, J.G.; Ko, J.; Hae, H.; Kang, S.J.; Kang, D.Y.; Lee, P.; Ahn, J.M.; Park, D.W.; Lee, S.W.; Kim, Y.H.; et al. Intravascular ultrasound-based machine learning for predicting fractional flow reserve in intermediate coronary artery lesions. Atherosclerosis 2020, 292, 171–177.
  53. Guvenir Torun, S.; Torun, H.; Hansen, H.; Gandini, G.; Berselli, I.; Codazzi, V.; de Korte, C.; van der Steen, A.; Migliavacca, F.; Chiastra, C.; et al. Multicomponent Mechanical Characterization of Atherosclerotic Human Coronary Arteries: An Experimental and Computational Hybrid Approach. Front. Physiol. 2021, 12, 1480.
  54. Boyd, C.; Brown, G.; Kleinig, T.; Dawson, J.; McDonnell, M.; Jenkinson, M.; Bezak, E. Machine learning quantitation of cardiovascular and cerebrovascular disease: A systematic review of clinical applications. Diagnostics 2021, 11, 551.
  55. Savaş, S.; Topaloglu, N.; Kazcı, O.; Koşar, P. Classification of Carotid Artery Intima Media Thickness Ultrasound Images with Deep Learning. J. Med. Syst. 2019, 43, 273.
  56. Skandha, S.; Gupta, S.; Saba, L.; Koppula, V.; Johri, A.; Khanna, N.; Mavrogeni, S.; Laird, J.; Pareek, G.; Miner, M.; et al. 3-D optimized classification and characterization artificial intelligence paradigm for cardiovascular/stroke risk stratification using carotid ultrasound-based delineated plaque: Atheromatic™ 2.0. Comput. Biol. Med. 2020, 125, 103958.
  57. Hsu, K.C.; Lin, C.H.; Johnson, K.; Liu, C.H.; Chang, T.Y.; Huang, K.L.; Fann, Y.C.; Lee, T.H. Autodetect extracranial and intracranial artery stenosis by machine learning using ultrasound. Comput. Biol. Med. 2020, 116, 103569.
  58. Saba, L.; Sanagala, S.; Gupta, S.; Koppula, V.; Laird, J.; Viswanathan, V.; Sanches, M.; Kitas, G.; Johri, A.; Sharma, N.; et al. A Multicenter Study on Carotid Ultrasound Plaque Tissue Characterization and Classification Using Six Deep Artificial Intelligence Models: A Stroke Application. IEEE Trans. Instrum. Meas. 2021, 70, 1–12.
  59. Luo, X.; Ara, L.; Ding, H.; Rollins, D.; Motaganahalli, R.; Sawchuk, A. Computational methods to automate the initial interpretation of lower extremity arterial Doppler and duplex carotid ultrasound studies. J. Vasc. Surg. 2021, 74, 988–996.e1.
  60. Klingensmith, J.; Haggard, A.; Ralston, J.; Qiang, B.; Fedewa, R.; Elsharkawy, H.; Geoffrey Vince, D. Tissue classification in intercostal and paravertebral ultrasound using spectral analysis of radiofrequency backscatter. J. Med. Imaging 2019, 6, 047001.
  61. Khanna, N.; Jamthikar, A.; Gupta, D.; Piga, M.; Saba, L.; Carcassi, C.; Giannopoulos, A.; Nicolaides, A.; Laird, J.; Suri, H.; et al. Rheumatoid Arthritis: Atherosclerosis Imaging and Cardiovascular Risk Assessment Using Machine and Deep Learning–Based Tissue Characterization. Curr. Atheroscler. Rep. 2019, 21, 7.
  62. Jamthikar, A.; Gupta, D.; Khanna, N.; Saba, L.; Araki, T.; Viskovic, K.; Suri, H.; Gupta, A.; Mavrogeni, S.; Turk, M.; et al. A low-cost machine learning-based cardiovascular/stroke risk assessment system: Integration of conventional factors with image phenotypes. Cardiovasc. Diagn. Ther. 2019, 9, 420–430.
  63. Guo, X.; Maehara, A.; Matsumura, M.; Wang, L.; Zheng, J.; Samady, H.; Mintz, G.; Giddens, D.; Tang, D. Predicting plaque vulnerability change using intravascular ultrasound + optical coherence tomography image-based fluid–structure interaction models and machine learning methods with patient follow-up data: A feasibility study. BioMedical Eng. Online 2021, 20, 34.