Version Summary
Version 1: created 14 January 2022 (1174 words).
Version 2: format/meta information correction, 17 January 2022 (1174 words).

Cite as: Zhang, Y. PeMNet for Pectoral Muscle Segmentation. Encyclopedia. Available online: https://encyclopedia.pub/entry/18252 (accessed on 27 April 2024).
PeMNet for Pectoral Muscle Segmentation

Deep learning has become a popular technique in modern computer-aided diagnosis (CAD) systems. In breast cancer CAD systems, pectoral muscle segmentation is an important pre-processing step that removes unwanted pectoral muscle from the images. This entry proposes a novel deep learning segmentation framework that provides fast and accurate pectoral muscle segmentation. In the proposed framework, the novel network architecture allows more useful information to be exploited, which in turn improves the segmentation results.

Keywords: pectoral segmentation; deep learning

1. Introduction

Besides the contribution of newly developed therapies to reducing the mortality rate, breast mammography, a gold standard in the community, has also significantly improved survival through earlier detection and is therefore of great significance. While there are numerous modalities for breast imaging, mammography is considered one of the most effective methods given its feasibility and performance.

The advancement of technology transformed mammography from radiographic film to digital form, known as full-field digital mammography (FFDM). The advantage of digital mammography is that radiologists can magnify mammograms or adjust their brightness and contrast for better interpretation. Another reason digital mammography has gained popularity is its low cost, while the acquired images can be stored as Digital Imaging and Communications in Medicine (DICOM) files. Usually, a breast is imaged in two projection planes, Cranio-Caudal (CC) and Medio-Lateral-Oblique (MLO), and on two sides, which leads to four images: LCC, RCC, LMLO, and RMLO. Mammography images are often inspected by a specialist to identify and localize abnormalities. However, the complexity of breast tissue and the subtlety of cancer in its early stages make interpreting mammograms an intrinsically challenging and time-consuming task.
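The four standard view labels mentioned above follow directly from combining laterality with projection plane; a trivial sketch (the helper name is illustrative, not from any DICOM library):

```python
# Enumerate the four standard screening mammography views:
# two lateralities (L/R) x two projections (CC/MLO).
from itertools import product

LATERALITIES = ["L", "R"]      # left breast, right breast
PROJECTIONS = ["CC", "MLO"]    # Cranio-Caudal, Medio-Lateral-Oblique

def standard_views():
    """Return the four standard view labels for one screening exam."""
    return [side + proj for side, proj in product(LATERALITIES, PROJECTIONS)]

print(standard_views())  # ['LCC', 'LMLO', 'RCC', 'RMLO']
```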

Computer-aided (CAD) systems for breast cancer analysis have emerged as an attractive alternative. These systems aim to automatically locate and classify abnormalities in mammograms so that radiologists can improve their efficiency. Regarding the analysis tasks, CAD systems can be broadly classified into computer-aided detection (CADe) systems, which are mainly responsible for detecting breast abnormalities (such as masses and calcifications), and computer-aided diagnosis (CADx) systems, which focus on classifying the detected abnormalities or entire images into one of several categories. These two systems can be integrated into an end-to-end system for higher efficiency, but they can also be used separately for specific applications.
Before the prevalence of CAD systems based on deep convolutional neural networks (CNNs), mammography-based CAD systems for breast cancer analysis mainly consisted of four steps: pre-processing, segmentation, feature extraction, and analysis. Pre-processing, a crucial step since the quality of the input images can determine the bottleneck of subsequent modules, enhances the desired features in the images while suppressing unwanted characteristics. Segmentation plays a key role in image analysis but remains a challenging task, even though considerable efforts using traditional methods such as thresholding and active contour-based methods have been made [1]. After segmentation, meaningful features, such as edges and shapes, are extracted in the feature extraction step and then used for the final diagnosis.
With the development of deep learning, segmentation, feature extraction, and classification can be integrated into a single deep learning model. Pre-processing, however, remains too large a topic to be included in single models. For breast cancer analysis, pre-processing mainly includes image enhancement and breast region segmentation. Image enhancement, especially for medical images, is generally applied to improve the brightness, contrast, and saturation of images. Given that a mammography image can measure thousands by thousands of pixels, breast region segmentation benefits CAD systems by narrowing down the regions of interest, and their efficiency improves because fewer pixels are involved in the computation. The pectoral muscle, which commonly appears in MLO-view mammograms, is usually removed before analysis because it can easily be misclassified as fibroglandular tissue. Additionally, artefacts accidentally produced during image acquisition may appear in the pectoral muscle areas of mammography images.
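The breast-region cropping step described above can be sketched with a simple intensity threshold standing in for a full segmentation model; this is a minimal illustration, assuming a mammogram normalized to [0, 1], and the function name and threshold value are illustrative:

```python
import numpy as np

def crop_breast_region(img, threshold=0.05):
    """Crop a mammogram to the bounding box of above-threshold pixels.

    `img` is a 2-D array scaled to [0, 1]; `threshold` crudely separates
    breast tissue from the dark background. Real pre-processing modules
    use proper segmentation, but the efficiency argument is the same:
    fewer pixels reach the downstream model.
    """
    mask = img > threshold
    rows = np.any(mask, axis=1)            # rows containing tissue
    cols = np.any(mask, axis=0)            # columns containing tissue
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    return img[r0:r1 + 1, c0:c1 + 1]

# Toy example: a 6x6 "mammogram" with tissue in one region
img = np.zeros((6, 6))
img[1:4, 1:3] = 0.8
print(crop_breast_region(img).shape)  # (3, 2)
```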
Moreover, pectoral muscle regions can be examined by radiologists for axillary lymph node abnormalities. Aiming at a robust and highly efficient breast pectoral muscle segmentation system, this entry developed an automatic segmentation framework named PeMNet. Inspired by the works [2][3], the possibility of combining a channel attention architecture with segmentation frameworks was further explored.
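The channel attention idea referenced above can be illustrated with a minimal squeeze-and-excitation-style sketch in NumPy: global average pooling squeezes each channel to a scalar, two small fully connected layers produce per-channel weights, and the feature map is rescaled channel-wise. This is a generic stand-in, not the actual PeMNet module; the function name and weight shapes are assumptions:

```python
import numpy as np

def channel_attention(feats, w1, w2):
    """SE-style channel attention (illustrative, not the PeMNet module).

    feats: (C, H, W) feature map
    w1:    (C//r, C) reduction weights; w2: (C, C//r) expansion weights
    """
    squeeze = feats.mean(axis=(1, 2))               # global avg pool -> (C,)
    hidden = np.maximum(w1 @ squeeze, 0.0)          # FC + ReLU
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))     # FC + sigmoid -> (C,)
    return feats * gate[:, None, None]              # rescale each channel

# Toy example: 2 channels of 2x2 features, fixed weights
feats = np.ones((2, 2, 2))
w1 = np.array([[0.5, 0.5]])        # reduce 2 -> 1
w2 = np.array([[1.0], [-1.0]])     # expand 1 -> 2
out = channel_attention(feats, w1, w2)
# Channel 0 is amplified more than channel 1: sigmoid(1) vs sigmoid(-1)
```

Because the gate values lie in (0, 1), the module can only re-weight channels, letting the network emphasize informative feature maps and suppress less useful ones.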

2. Research Findings

Given the importance of breast pectoral segmentation, many efforts ranging from traditional methods to state-of-the-art deep CNN methods have been made. However, it remains an open problem. One main issue is the lack of large-scale, well-annotated datasets for training high-performance models. In recent years, considerable effort has been devoted to developing intelligent and robust methods for breast pectoral segmentation, yet the majority of methods are evaluated on self-annotated public datasets or even private datasets due to the limited availability of data. In this entry, the segmentation framework was evaluated both on an access-limited dataset, OPTIMAM, and on a public dataset named INbreast. Built on the Deeplabv3+ model, the proposed novel attention module was integrated into PeMNet for the segmentation task. Compared to traditional methods, which suffered from poor performance, the proposed method turned out to be more reliable. Compared to deep CNN-based methods, the proposed PeMNet offers architectural novelty while remaining the best-performing model among those compared.
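Segmentation performance of the kind compared above is commonly scored with overlap metrics such as the Dice coefficient and intersection-over-union (IoU); the entry does not list its exact metrics here, so the following is a generic sketch for binary masks:

```python
def dice_and_iou(pred, target):
    """Dice coefficient and IoU for binary masks given as flat 0/1 lists.

    Dice = 2|P ∩ T| / (|P| + |T|);  IoU = |P ∩ T| / |P ∪ T|.
    Both are 1.0 for a perfect match and 0.0 for no overlap.
    """
    inter = sum(p and t for p, t in zip(pred, target))
    p_sum, t_sum = sum(pred), sum(target)
    union = p_sum + t_sum - inter
    dice = 2 * inter / (p_sum + t_sum) if (p_sum + t_sum) else 1.0
    iou = inter / union if union else 1.0
    return dice, iou

pred   = [1, 1, 0, 0]
target = [1, 0, 1, 0]
print(dice_and_iou(pred, target))  # (0.5, 0.3333...)
```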
Another issue with models for pectoral segmentation is robustness. Before the advent of deep learning, feature-based methods dominated the field; however, the robustness of such systems left much to be desired, as minor changes in the images could cause them to fail. A key advantage of deep learning-based methods is that robustness is drastically enhanced. In this respect, the proposed segmentation framework proved robust across various situations and turned out to be well suited to pectoral muscle segmentation tasks.

3. Conclusions

In this entry, an automatic breast pectoral segmentation model named PeMNet was successfully developed for mammogram pre-processing in mammography image analysis. The key to the model is the proposed novel attention module, which is architecturally friendly to deep CNNs and can therefore be easily repurposed for new computer vision tasks. By integrating the attention module, the proposed PeMNet framework showed the highest performance on pectoral muscle segmentation.
Nevertheless, there are still some limitations to this study. One problem is that the effectiveness of the proposed attention module remains to be improved: as seen in the experiments, PeMNet with shallow deep CNN backbones performed even worse than Deeplabv3+ models with the same backbones. One reason could lie with the data, as the datasets used for validation are still quite small and publicly available datasets for breast pectoral segmentation are quite limited. The proposed attention module may therefore be validated on larger-scale datasets in the future, and further exploration of the architecture itself is also warranted. Another open issue is the choice of backbones for the segmentation model.

References

  1. Li, C.; Huang, R.; Ding, Z.; Gatenby, J.C.; Metaxas, D.N.; Gore, J.C. A level set method for image segmentation in the presence of intensity inhomogeneities with application to MRI. IEEE Trans. Image Process. 2011, 20, 2007–2016.
  2. Woo, S.; Park, J.; Lee, J.Y.; Kweon, I.S. CBAM: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19.
  3. Lou, M.; Wang, R.; Qi, Y.; Zhao, W.; Xu, C.; Meng, J.; Deng, X.; Ma, Y. MGBN: Convolutional neural networks for automated benign and malignant breast masses classification. Multimed. Tools Appl. 2021, 80, 26731–26750.