Deep Learning in Different Ultrasound Methods for Breast Cancer

Breast cancer is the second-leading cause of mortality among women around the world. Ultrasound (US) is one of the noninvasive imaging modalities used to diagnose breast lesions and to monitor the prognosis of cancer patients. It has the highest sensitivity for diagnosing breast masses, but its high operator dependence increases the false-negative rate. Underserved areas often lack sufficient US expertise to diagnose breast lesions, resulting in delayed management. Deep learning neural networks may have the potential to support early decision-making by physicians by diagnosing lesions rapidly yet accurately and by monitoring patients' prognosis.

  • deep learning
  • ultrasound modalities
  • breast cancer

1. Introduction

Breast cancer is the most common cancer worldwide and the second leading cause of death among women [1]. Ultrasound (US) is used in conjunction with mammography to screen for and diagnose breast masses, particularly in dense breasts. US has the potential to reduce the overall cost of breast cancer management, and it can reduce the number of benign open biopsies by facilitating fine needle aspiration, which is preferable because of its high sensitivity, specificity, and limited invasiveness [2,3,4,5]. The BI-RADS classification helps distinguish patients who need follow-up imaging from patients who require diagnostic biopsy [6]. Moreover, intraoperative use can localize breast cancer in a cost-effective fashion and reduce the tumor-involved margin rate, ultimately reducing the cost of additional management [7,8]. However, one of the major disadvantages of ultrasonography is its high operator dependence, which increases the false-negative rate [9].
Deep learning may therefore come into play in reducing the manual workload of operators, creating a new role for physicians. Incorporating deep learning models into ultrasound may reduce the false-negative rate and the overall cost of breast cancer management. It can help physicians and patients make prompt decisions by detecting and diagnosing lesions and by monitoring prognosis and treatment progress with considerable accuracy and time efficiency. This possibility has generated considerable enthusiasm, but it also requires critical evaluation.
Several review papers have been published in the last decade on the role of deep learning in ultrasound for breast cancer segmentation and classification. They mostly combined deep learning models with B-mode, shear wave elastography (SWE), and color Doppler images, and sometimes with other imaging modalities [10,11,12,13,14,15]. Several surveys have also been published on deep learning and machine learning models with B-mode and SWE images, as well as multimodality images, for breast cancer classification [16,17,18]. Several concerns remain, such as bias in favor of the newly proposed model and whether the findings are generalizable and applicable to real-world settings. A considerable number of deep learning models have been developed for automatic breast cancer segmentation and classification, but there is a lack of data on how they improve the overall management of breast cancer, from screening to diagnosis and ultimately to survival. There are also insufficient data on which ultrasound modes are being used for deep learning algorithms.
This article reviews the current research trends on deep learning models in different ultrasound modalities for breast cancer management, from screening to diagnosis to prognosis, and the future challenges and directions of the application of these models.

2. Imaging Modalities Used in Breast Lesions

Various imaging modalities are used to diagnose breast masses. Self-examination, mammography, and ultrasound are usually used for screening, and if a mass is found, ultrasonography and/or MRI are usually performed to evaluate the lesion [19]. Ultrasound has been used at various stages of breast cancer management, including screening of dense breasts, diagnosis, and prognosis during chemotherapy, owing to its noninvasive nature, lack of ionizing radiation, portability, real-time imaging that enables biopsy guidance, and cost-effectiveness.

3. Computer-Aided Diagnosis and Machine Learning in Breast Ultrasound

Computer-aided diagnosis (CAD) can combine machine learning and deep learning models with multidisciplinary knowledge to diagnose a breast mass [22]. Handheld US has been supplemented with automated breast US (ABUS) to reduce intraoperator variability [23]. The impact of 3D ABUS as a screening modality for breast cancer detection in dense breasts has been investigated, as the CAD system substantially decreases interpretation time [23]. For diagnosis, several studies have shown that 3D ABUS can help detect breast lesions and distinguish malignant from benign lesions [24], predict the extent of a breast lesion [25], monitor response to neoadjuvant chemotherapy [26], and correlate with molecular subtypes of breast cancer [27], with high interobserver agreement [23,28]. One study proposed a CAD system using a super-resolution algorithm that reconstructs a high-resolution image from a set of low-resolution images to improve texture analysis methods for breast tumor classification [29].
In machine learning, features that may appear distinctive in the data are discerned and encoded by human experts, and the data are then organized or segregated with statistical techniques according to these features [30,31]. Research on various machine learning models for classifying benign and malignant breast masses has been published in the past decade [32]. Most recent papers used the k-nearest neighbors algorithm, support vector machines, multiple discriminant analysis, probabilistic artificial neural networks (ANNs), logistic regression, random forests, decision trees, naïve Bayes, and AdaBoost for the diagnosis and classification of breast masses; binary logistic regression for classification of BI-RADS category 3a; and linear discriminant analysis (LDA) for analysis of axillary lymph node status in breast cancer patients [32,33,34,35,36,37].
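To make this feature-based workflow concrete, the sketch below trains a few of the classifiers named above on a synthetic matrix of hand-crafted lesion features. The feature values, labels, and train/test split are placeholders invented for illustration, not data or code from any cited study.

```python
# Minimal sketch: classical ML classifiers on hand-crafted ultrasound features.
# X and y are synthetic stand-ins for expert-extracted descriptors
# (texture, shape, margin, etc.) and benign/malignant labels.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))        # 200 lesions x 12 hand-crafted features
y = rng.integers(0, 2, size=200)      # 0 = benign, 1 = malignant (synthetic)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

models = {
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(probability=True),
    "Logistic regression": LogisticRegression(max_iter=1000),
    "Random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    clf = make_pipeline(StandardScaler(), model)  # scale features, then classify
    clf.fit(X_train, y_train)
    auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
    print(f"{name}: test AUC = {auc:.2f}")
```

With real descriptors extracted from B-mode images in place of the random matrix, the same pipeline yields comparable classifiers for benign-versus-malignant discrimination.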

4. What Is Deep Learning and How Does It Differ

Deep learning (DL) is part of a broader family of machine learning methods that mimic the way the human brain learns. DL utilizes multiple layers to gather knowledge, and the abstraction of the learned features increases in a sequential, layer-wise manner [30]. Unlike classical machine learning, deep learning requires little to no human feature engineering and uses multiple layers instead of a single one. DL algorithms have also been applied to cancer images from various modalities for diagnosis, classification, lesion segmentation, and other tasks [38]. In some studies, these algorithms have also incorporated clinical or histopathological data to make cancer diagnoses.
There are various types of convolutional neural networks (CNNs). The essential parts of a CNN are the input layer, the output layer, convolutional layers, max-pooling layers, and fully connected layers [30,39]. The input layer must match the raw input data [30,39]. The output layer must match the teaching (target) data; in classification tasks, the number of units in the output layer must equal the number of categories in the teaching data [30,39]. The layers between the input and output layers are called hidden layers [30,39].
These multiple convolutional, fully connected, and pooling layers facilitate the learning of richer features [30,39]. Typically, a convolutional layer extracts features from the input image and passes them to the next layer [30,39]. Convolution preserves the relationships between pixels and produces activations [30,39]. The repeated application of the same filter across the input creates a map of activations, called a feature map, which reveals the intensity and location of the features recognized in the input [30,39]. The pooling layers adjust the spatial size of the activation signals to minimize the possibility of overfitting [30,39]. Spatial pooling is similar to downsampling: it reduces the dimensionality of each map while retaining the important information. Max pooling is the most common type of spatial pooling [30,39].
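To illustrate these two steps, the short sketch below (a hypothetical PyTorch example, not code from any cited study) applies one convolutional layer and one max-pooling layer to a dummy single-channel image and prints the feature-map shapes before and after pooling.

```python
# One convolution + max-pooling step on a dummy grayscale "ultrasound" image.
import torch
import torch.nn as nn

x = torch.randn(1, 1, 128, 128)      # batch of 1, single channel, 128x128 pixels

conv = nn.Conv2d(in_channels=1, out_channels=8, kernel_size=3, padding=1)
pool = nn.MaxPool2d(kernel_size=2)   # downsamples each feature map by a factor of 2

feature_maps = torch.relu(conv(x))   # 8 feature maps, each still 128x128
pooled = pool(feature_maps)          # 8 feature maps, each reduced to 64x64

print(feature_maps.shape)            # torch.Size([1, 8, 128, 128])
print(pooled.shape)                  # torch.Size([1, 8, 64, 64])
```

Note how pooling halves the spatial size of every feature map while the number of maps, and thus the detected features, is preserved.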
The function of a fully connected layer is to take the results from the convolutional and pooling layers and use them to classify the information, such as images, into labels [30,39]. Fully connected layers connect every neuron in one layer to every neuron in the next layer through a linear transformation [30,39]. The signal is then passed through an activation function to the next layer of neurons [30,39]. The rectified linear unit (ReLU) function, a nonlinear transformation, is commonly used as the activation function [30,39]. The output layer is the final layer, producing the given outputs [30,39].
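Putting the pieces together, the following is a minimal sketch of a complete CNN classifier of the kind described above; the layer counts, sizes, and two-class output (e.g., benign versus malignant) are illustrative assumptions, not the architecture of any specific published model.

```python
# Minimal CNN sketch: input -> conv/ReLU/pool blocks -> fully connected -> 2 classes.
import torch
import torch.nn as nn

class TinyBreastUSCNN(nn.Module):
    def __init__(self, num_classes: int = 2):        # e.g., benign vs. malignant
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                          # 128x128 -> 64x64
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                          # 64x64 -> 32x32
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),                             # 32 maps x 32 x 32 = 32768 values
            nn.Linear(32 * 32 * 32, 64), nn.ReLU(),   # fully connected hidden layer
            nn.Linear(64, num_classes),               # one output unit per category
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = TinyBreastUSCNN()
logits = model(torch.randn(4, 1, 128, 128))           # batch of 4 dummy images
print(logits.shape)                                   # torch.Size([4, 2])
```

The output layer has exactly as many units as there are categories, matching the requirement stated above for classification tasks.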

5. IoT Technology in Breast Mass Diagnosis

Recently, the Industrial Internet of Things (IIoT) has emerged as one of the fastest-developing networks, able to collect and exchange huge amounts of data using sensors in the medical field [40]. When used in the therapeutic or surgical field, it is sometimes termed the “Internet of Medical Things” (IoMT) or the “Internet of Surgical Things” (IoST), respectively [41,42,43,44]. IoMT implies a networked infrastructure of medical devices, applications, health systems, and services. It assesses physical properties using portable devices integrated with AI methods, often enabling wireless and remote operation [45,46]. This technology is improving remote patient monitoring, disease diagnosis, and efficient treatment via telehealth services used by both patients and caregivers [47]. Ragab et al. [48] developed an ensemble deep learning-based clinical decision support system for breast cancer diagnosis using ultrasound images.
Singh et al. introduced an IoT-based deep learning model to diagnose breast lesions using pathological datasets [49]. One study suggested that a wearable IoT jacket with temperature sensors has the potential to identify breast masses early [50]. Another study proposed an IoT-cloud-based health care (ICHC) system framework for breast health monitoring [51]. Peta et al. proposed an IoT-based deep max-out network to classify breast masses using a breast dataset [52]. However, most of these studies did not specify what kind of dataset they used. Image-guided surgery (IGS) using IoT networks may also have the potential to improve surgical outcomes in procedures where maximum precision is required in tracking anatomical landmarks and instruments [44]. However, there is no study on IoST-based techniques involving breast US datasets.

This entry is adapted from the peer-reviewed paper 10.3390/cancers15123139
