Kara, A. Detection/Classification of Knee Injuries from MR Images. Encyclopedia. Available online: https://encyclopedia.pub/entry/17183 (accessed on 17 November 2024).
Detection/Classification of Knee Injuries from MR Images

Magnetic resonance imaging (MRI) is a technique for imaging the internal structure of the body as well as certain aspects of its function.

Keywords: deep learning; transfer learning; ResNet50; convolutional neural networks; magnetic resonance imaging; MRNet

1. Introduction

The application of artificial intelligence in the healthcare industry has grown substantially in recent years [1], since it can enhance diagnostic accuracy, boost the efficiency of workflows and operations, and simplify patient monitoring. In healthcare, computer-based technology builds on digitised data collected in fields [2] such as computed tomography (CT) [3], magnetic resonance imaging (MRI), X-ray [4], and ultrasound [5][6]. Most of the strongest demonstrations of deep learning performance, however, come from computer vision applied to medical images and videos [7]. With breakthroughs in deep learning and image processing [8][9], these methods can recognise and locate complex patterns across radiological imaging modalities, and several have recently demonstrated performance comparable to human decision making [1]. When reading medical images, radiologists typically scan the entire image to locate lesions, analyse and quantify their attributes, and then describe them in a report. This routine procedure is time-consuming; more critically, certain important abnormal findings may go unnoticed by human readers [1]. Technological advances now make it possible to generate high-resolution magnetic resonance (MR) images in a short amount of time, allowing faster scanning, and deep learning approaches are widely used to diagnose disease from MRI data acquired across diverse parts of the body [10][11][12][13].
The MR image is produced by placing the patient inside a large magnet that generates a relatively powerful external magnetic field [14]. Geometric planes used to section the body into pieces are called body planes. They are commonly used to define the placement or orientation of bodily structures in both human and animal anatomy; anatomical terminology refers to these reference planes as standard planes. The coronal plane (frontal or Y-X plane) divides the body into dorsal and ventral portions. The axial plane (transverse or X-Z plane) divides the body into superior and inferior (head and tail) portions; it is usually a horizontal plane, parallel to the ground, that runs through the centre of the body. The sagittal plane (lateral or Y-Z plane) divides the body into left (sinister) and right (dexter) sides [15]. Figure 1 shows these planes.
Figure 1. (a) Sagittal plane; (b) coronal plane; (c) axial plane [16].

2. Selecting Eligible Data

The adoption of standardised datasets and benchmarks for assessing machine learning algorithms has been a crucial factor in the advancement of computer vision. These datasets define the scope of the tasks being addressed and frequently pair imaging data with human annotations [7]. Transfer learning aims to maximise learning performance in a target field by storing knowledge acquired while solving one problem and transferring it to a different but related problem. This can remove the need for a large amount of training data in the target field [17]. ImageNet [18] is a high-profile example of a large imaging dataset used to benchmark and compare the relative performance of machine learning models on still images [19].
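The transfer-learning idea above can be sketched as follows: a hypothetical pretrained feature extractor is kept frozen while only a small classification head is trained on target-domain data. This is a minimal NumPy illustration, not the entry's actual pipeline; all names, shapes, and the logistic head are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretrained" feature extractor: these weights stand in for knowledge
# learned on a source task (e.g. ImageNet) and are frozen (never updated).
W_frozen = rng.standard_normal((256, 32))

def extract_features(x):
    # Fixed transform carried over from the source task.
    return np.maximum(x @ W_frozen, 0.0)  # ReLU features

# Small trainable head for the target task (binary classification).
w_head = np.zeros(32)

def train_head_step(x, y, lr=0.1):
    """One SGD step on the head only; the extractor stays frozen."""
    global w_head
    f = extract_features(x)
    p = 1.0 / (1.0 + np.exp(-f @ w_head))  # sigmoid probability
    w_head -= lr * (p - y) * f             # logistic-loss gradient on the head
```

Because only `w_head` is updated, far fewer labelled target examples are needed than when training the whole network from scratch.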
Milestones among modern convolutional neural networks include LeNet [20], developed by LeCun et al. in 1998; AlexNet [21], created by Krizhevsky et al. in 2012; VGGNet [22], produced by Simonyan and Zisserman at the Visual Geometry Group (VGG) laboratory in 2014; and residual networks (ResNet), built by He et al. in 2015. Prior to ResNet, it was believed that adding more convolutional layers would improve performance. On the contrary, increasing the network depth led to the vanishing gradient problem, in which the influence of the initial layers is considerably weakened during backpropagation because the gradient is repeatedly multiplied by small factors. The ResNet model addresses this by establishing shortcut connections between layers: the shortcuts carry gradient values past the residual blocks, mitigating the vanishing gradient problem.
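The shortcut idea can be illustrated with a minimal residual block: the output is the input plus a learned transformation, so activations and gradients can flow through the identity path even when the transformation's weights are small. This is a NumPy sketch under assumed shapes, not the exact ResNet50 block (which uses convolutions, batch normalisation, and a bottleneck design).

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, W1, W2):
    """y = ReLU(x + F(x)), where F is a small two-layer transformation.

    The identity shortcut (the bare `x +` term) lets both activations and
    gradients bypass F, which is what counters vanishing gradients.
    """
    h = relu(x @ W1)           # first transformation layer
    return relu(x + h @ W2)    # shortcut: add the input back before the ReLU
```

Note that when `W1` and `W2` shrink toward zero, the block degrades gracefully to `relu(x)` (the identity path) rather than to a useless layer, which is why very deep stacks of such blocks remain trainable.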
MRI data consist of multiple images stacked one after another. Selecting the images eligible for diagnosing the disease before working on them improves the efficiency of the study. In addition, while selecting the eligible images, the accuracy of the classification process was checked, and excessively noisy and damaged images were identified.

3. Selecting the Relevant Area

Selecting the relevant regions improves the accuracy of the study by focusing the diagnostic phase on the region where the disease is to be diagnosed rather than the entire image. With this in mind, the study by Saygılı et al. [23] searched the meniscus structures within a tightly narrowed area, which they obtained by manually marking the MR images. They analysed a dataset that included both healthy and unhealthy MR images with varying degrees of discomfort; the dataset contained 88 MR images labelled by radiologists. The study investigated the effect of two successful feature extraction methods: histograms of oriented gradients (HOG) [24] and local binary patterns (LBP) [25]. HOG in particular delivers strong image recognition results; it extracts features from the gradients and orientations of the pixels in the image.
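The gradient-and-orientation computation underlying HOG can be sketched as follows. This is a minimal NumPy illustration of the first stage only, not a full HOG pipeline (which goes on to bin orientations into per-cell histograms and normalise them over blocks); the ramp image is a made-up example.

```python
import numpy as np

def gradient_orientation(img):
    """Per-pixel gradient magnitude and orientation, HOG's first stage."""
    gy, gx = np.gradient(img.astype(float))   # vertical / horizontal gradients
    magnitude = np.hypot(gx, gy)              # edge strength at each pixel
    orientation = np.arctan2(gy, gx)          # edge direction in radians
    return magnitude, orientation

# A horizontal intensity ramp: brightness increases left to right,
# so every pixel has a unit gradient pointing along the x-axis.
ramp = np.tile(np.arange(8.0), (8, 1))
mag, ori = gradient_orientation(ramp)
```

In the full HOG descriptor these per-pixel orientations are accumulated into histograms weighted by magnitude, which is what makes the feature robust to small deformations.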
The ResNet50 model, pre-trained on ImageNet, expects three-channel images [26]. The images in the MRNet dataset have size 256 × 256 × 1, so the one-channel images must be converted to three-channel ones. For this conversion, a function was built that, each time a one-channel image arrives, repeats the colour value of each pixel three times into an empty matrix of 256 × 256 × 3 dimensions to generate the new image. Figure 2 demonstrates the conversion of a one-channel image to a three-channel one.
Figure 2. The function used to increase the number of channels and, an example of a 256 × 256 × 1 coronal image, which was converted to 256 × 256 × 3.
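The channel-replication logic described above can be sketched as follows; this is a NumPy sketch of the same idea, not the entry's original function.

```python
import numpy as np

def to_three_channels(img):
    """Replicate a (H, W, 1) or (H, W) grayscale image into (H, W, 3).

    Each pixel's single intensity is copied into all three channels, so the
    result matches the input shape an ImageNet-pretrained ResNet50 expects.
    """
    img = np.asarray(img)
    if img.ndim == 3 and img.shape[-1] == 1:
        img = img[..., 0]                       # drop the singleton channel
    return np.repeat(img[..., np.newaxis], 3, axis=-1)

slice_1ch = np.zeros((256, 256, 1), dtype=np.uint8)
slice_3ch = to_three_channels(slice_1ch)        # shape (256, 256, 3)
```

Replicating the grayscale value, rather than zero-filling the extra channels, keeps the pretrained first-layer filters seeing the intensities they were trained on.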
An exponential-decay learning rate was employed so that the rate drops as training progresses: the learning rate started at 0.01 and declined with a decay rate of 0.96 over 4000 decay steps. Stochastic gradient descent (SGD) was used as the optimiser. SGD is known to exhibit large oscillations [27]; Li et al. [28], however, demonstrated how the SGD oscillation varies when exponential decay is employed with different momentum values. Figure 3 shows the model's structure as well as its parameter values.
Figure 3. General structure of image classification model.
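The schedule described above follows the standard exponential-decay formula, lr(step) = lr₀ · rate^(step / decay_steps). A short sketch with the stated values (0.01 initial rate, 0.96 decay, 4000 decay steps); whether the original used continuous or staircase decay is not stated, so continuous decay is assumed here.

```python
def exp_decay_lr(step, initial_lr=0.01, decay_rate=0.96, decay_steps=4000):
    """Learning rate after `step` optimiser steps (continuous decay)."""
    return initial_lr * decay_rate ** (step / decay_steps)

lr_start = exp_decay_lr(0)      # initial rate: 0.01
lr_4000 = exp_decay_lr(4000)    # one full decay period: 0.01 * 0.96
```

A staircase variant would use `step // decay_steps` in the exponent, holding the rate constant within each 4000-step window.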
Figure 4a depicts the classification of images pertaining to one patient; Figure 4b shows the confusion matrix produced after training on sagittal images, from a classification test on the data of 10 patients; Figure 4c provides the accuracy over 40 epochs; and Figure 4d gives the training and validation loss values.
Figure 4. (a) Classification process of sagittal MRI; (b) confusion matrix of the produced validation dataset for sagittal plane; (c) training and validation accuracy rates; (d) training and validation loss values.
Upon examination of the sagittal training and validation images, six training and two validation items appeared not to have been selected. The unselected images were observed to be misclassified or excessively noisy. Table 2 lists the unselected images, and Figure 5 shows examples of them.
Figure 5. Examples of unselected images and patients’ numbers in the dataset: (a) 0003; (b) 0370; (c) 0544; (d) 0582; (e) 0665; (f) 0776; (g) 1159; (h) 1230.

References

  1. Kaul, V.; Enslin, S.; Gross, S.A. History of artificial intelligence in medicine. Gastrointest. Endosc. 2020, 92, 807–812.
  2. Jin, D.; Harrison, A.P.; Zhang, L.; Yan, K.; Wang, Y.; Cai, J.; Miao, S.; Lu, L. Artificial intelligence in radiology. In Artificial Intelligence in Medicine, 1st ed.; Xing, L., Giger, M.L., Min, J.K., Eds.; Academic Press: London, UK, 2020; pp. 668–727.
  3. Amarasinghe, K.C.; Lopes, J.; Beraldo, J.; Kiss, N.; Bucknell, N.; Everitt, S.; Jackson, P.; Litchfield, N.; Denehy, L.; Blyth, B.J.; et al. A Deep Learning Model to Automate Skeletal Muscle Area Measurement on Computed Tomography Images. Front. Oncol. 2021, 11, 1135.
  4. Seo, J.W.; Lim, S.H.; Jeong, J.G.; Kim, Y.J.; Kim, K.G.; Jeon, J.Y. A deep learning algorithm for automated measurement of vertebral body compression from X-ray images. Sci. Rep. 2021, 11, 13732.
  5. Liu, S.; Wang, Y.; Yang, X.; Lei, B.; Liu, L.; Li, S.X.; Ni, D.; Wang, T. Deep Learning in Medical Ultrasound Analysis: A Review. Engineering 2019, 5, 261–275.
  6. Tastan, A.; Hardalaç, N.; Kavak, S.B.; Hardalaç, F. Detection of Fetal Reactions to Maternal Voice Using Doppler Ultrasound Signals. In Proceedings of the 2018 International Conference on Artificial Intelligence and Data Processing (IDAP), Malatya, Turkey, 28–30 September 2018; pp. 1–6.
  7. Ouyang, D.; Wu, Z.; He, B.; Zou, J. Deep learning for biomedical videos: Perspective and recommendations. In Artificial Intelligence in Medicine, 1st ed.; Xing, L., Giger, M.L., Min, J.K., Eds.; Academic Press: London, UK, 2020; pp. 132–162.
  8. Sreelakshmi, D.; Inthiyaz, S. A Review on Medical Image Denoising Algorithms. Biomed. Signal Process. Control. 2020, 61, 102036.
  9. Liu, H.; Rashid, T.; Ware, J.; Jensen, P.; Austin, T.; Nasrallah, I.; Bryan, R.; Heckbert, S.; Habes, M. Adaptive Squeeze-and-Shrink Image Denoising for Improving Deep Detection of Cerebral Microbleeds. In Proceedings of the MICCAI 2021–24th International Conference on Medical Image Computing & Computer Assisted Intervention, Strasbourg, France, 27 September–1 October 2021.
  10. Giovanni, B.; Olivier, L.M. Artificial Intelligence in Medicine: Today and Tomorrow. Front. Med. 2020, 7, 27.
  11. McBee, M.P.; Awan, O.A.; Colucci, A.T.; Ghobadi, C.W.; Kadom, N.; Kansagra, A.P.; Tridandapani, S.; Aufdermann, W.F. Deep Learning in Radiology. Acad. Radiol. 2018, 25, 1472–1480.
  12. Ueda, D.; Shimazaki, A. Technical and clinical overview of deep learning in radiology. Jpn. J. Radiol. 2019, 37, 15–33.
  13. Montagnon, E.; Cerny, M.; Cadrin-Chênevert, A.; Vincent, H.; Thomas, D.; Ilinca, A.; Vandenbroucke-Menu, F.; Turcotte, S.; Kadoury, S.; Tang, A. Deep learning workflow in radiology: A primer. Insights Imaging 2019, 11, 22.
  14. Fatahi, M.; Speck, O. Magnetic resonance imaging (MRI): A review of genetic damage investigations. Mutat. Res. Rev. Mutat. Res. 2015, 764, 51–63.
  15. Body Planes and Sections. Available online: https://med.libretexts.org/@go/page/7289 (accessed on 10 November 2021).
  16. Anatomical Planes of the Body. Available online: https://www.spineuniverse.com/anatomy/anatomical-planes-body (accessed on 10 November 2021).
  17. Zhuang, F.; Qi, Z.; Duan, K.; Xi, D.; Zhu, Y.; Zhu, H.; Xiong, H.; He, Q. A Comprehensive Survey on Transfer Learning. Proc. IEEE 2021, 109, 43–76.
  18. Deng, J.; Dong, W.; Socher, R.; Li, L.; Kai, L.; Fei-Fei, L. ImageNet: A Large-Scale Hierarchical Image Database. In Proceedings of the 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2009), Miami, FL, USA, 20–25 June 2009; pp. 248–255.
  19. Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. ImageNet Large Scale Visual Recognition Challenge. Int. J. Comput. Vis. 2015, 115, 211–252.
  20. LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-Based Learning Applied to Document Recognition. Proc. IEEE 1998, 86, 2278–2324.
  21. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25, 1097–1105.
  22. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. In Proceedings of the 3rd International Conference on Learning Representations (ICLR 2015), San Diego, CA, USA, 7–9 May 2015; Available online: https://arxiv.org/abs/1409.1556 (accessed on 25 November 2021).
  23. Saygılı, A.; Varlı, S. Automated Diagnosis of Meniscus Tears from MRI of the Knee. Int. Sci. Vocat. Stud. J. 2019, 3, 92–104.
  24. Dalal, N.; Triggs, B. Histograms of oriented gradients for human detection. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–25 June 2005.
  25. Song, K.; Yan, Y.; Chen, W.; Zhang, X. Research and Perspective on Local Binary Pattern. Acta Autom. Sin. 2013, 39, 730–744.
  26. ResNet and ResNetV2. Available online: https://keras.io/api/applications/resnet/ (accessed on 25 November 2021).
  27. Ruder, S. An overview of gradient descent optimization algorithms. arXiv 2017, arXiv:1609.04747.
  28. Li, Z.; Arora, S. An Exponential Learning Rate Schedule for Deep Learning. arXiv 2019, arXiv:1910.07454v3.