Diabetic Retinopathy Detection by Deep-Learning Technique: History

Medical professionals can benefit greatly from deep-learning-based systems that automate the interpretation of retinal images and provide objective, consistent assessments of DR severity. DL-based methods can screen a large diabetic population for DR. Deep-learning algorithms can also monitor disease progression over time by evaluating successive retinal images.

  • diabetic retinopathy
  • machine learning
  • deep learning
  • artificial intelligence

1. Introduction

DR is a retinal complication of diabetes [1]. It impairs or completely destroys an individual’s vision. Uncontrolled diabetes over an extended period increases the risk of visual impairment due to diabetic maculopathy [2,3,4]. Retinal capillaries are susceptible to damage from high blood sugar. Over time, this deterioration leaves blood vessels more vulnerable to further damage or even rupture [5]. The risk of DR depends on diabetes duration, blood sugar management, genetic susceptibility, hypertension, and lipid abnormalities [6,7,8]. DR is more likely to develop in individuals with type 1 or type 2 diabetes whose blood sugar is poorly managed [9]. DR is a leading cause of irreversible blindness worldwide [10]. In addition, DR contributes to serious disorders like proliferative DR, the most prevalent microvascular complication [11]. Early diagnosis is one of the crucial factors for reducing the severity of DR.
The field of ophthalmology relies heavily on analyzing blood-vessel structures in retinal fundus images. Permanent vision loss can occur due to age-related macular degeneration and diabetic macular edema [11]. Optical coherence tomography (OCT) is a crucial tool for ophthalmologists in diagnosing DR [12]. Ophthalmologists must devote considerable time to detecting abnormalities, which is crucial to preventing or mitigating DR-related visual impairment. Minimally invasive methods and robotic-assisted surgery have become increasingly prevalent in ophthalmology, especially for procedures such as cataract surgery and glaucoma treatment [11]. These advancements have been shown to reduce patient discomfort and expedite healing. To increase the efficacy of ocular drugs, new drug-delivery devices such as sustained-release implants and punctal plugs have been developed [12]. Teleophthalmology facilitates both the remote evaluation of retinal images and consultations with patients [12], and has demonstrably improved access to DR screening. Emerging imaging techniques such as hyperspectral and multispectral imaging show promise for early disease identification and tissue characterization [12]. Medical diagnosis and therapy have greatly benefited from developments in 3D and 4D medical imaging, which have enhanced the visualization of anatomical structures [12]. Portable and handheld imaging technologies have surged in popularity, as they allow healthcare practitioners to perform imaging at the point of care and provide effective treatments.
In contrast to more traditional procedures, such as pupil dilation, automated retinal image processing has greatly facilitated the diagnosis of retinal diseases [13]. In recent years, artificial intelligence (AI) and machine learning (ML) algorithms have made significant advances in the automated identification and assessment of DR from retinal images. The primary objective of these systems is to optimize the early detection of medical conditions and improve the overall care and treatment of patients [14]. AI systems can examine retinal images and scans to identify the earliest stages of DR, and these algorithms can detect and categorize DR severities [14]. The computerized screening procedure aids in making a timely diagnosis, which is essential for effective therapy. AI applications can help prioritize patients according to the severity of their conditions [15]. AI can also be used to evaluate large datasets of retinal images and patient information to improve DR-detection strategies.
Fundus images, together with OCT scans and ultrasonography, are widely applied to DR detection. Fundus images capture the blood vessels, the macula, and the interior surface of the retina. The fundus camera provides high-quality retinal images, which are used in deep-learning (DL) models for detecting abnormalities [16].
A convolutional neural network (CNN) is a class of artificial neural network architecture used primarily for processing images and videos [16]. In recent years, CNNs have played a pivotal role in advancing computer vision by helping to resolve various visual recognition challenges [17]. CNNs are built from numerous distinct convolutional layers, each of which learns to identify particular image characteristics or patterns. Convolution operations compute feature maps by sliding small filters across the input image [18]. Pre-trained CNN models can be fine-tuned for use in multiple applications; these models have been exposed to extensive data and have learned rich feature representations. Transfer-learning (TL) approaches can therefore achieve excellent results with smaller datasets [19]. To enhance feature extraction and prediction for DR detection and to overcome the limitations of unbalanced and noisy fundus-image data, existing studies have employed data-augmentation approaches, sampling techniques, cost-sensitive algorithms, and hybrid and ensemble architectures [20].
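The feature-map computation described above can be sketched in a few lines. The following toy example, which is not drawn from any cited study, slides a 2×2 vertical-edge filter over a tiny synthetic image; this is the same operation a CNN layer performs, except that a CNN learns its filter weights from data:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Slide a small filter over the image and compute one feature map
    ("valid" convolution, no padding). Toy illustration only."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            # Element-wise product of the filter and the image patch, summed
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out

# Synthetic 4x4 "image" with a sharp intensity step, loosely analogous
# to a vessel boundary in a fundus image.
image = np.array([[0, 0, 9, 9],
                  [0, 0, 9, 9],
                  [0, 0, 9, 9],
                  [0, 0, 9, 9]], dtype=float)

# A hand-crafted vertical-edge filter: responds where intensity rises
# from left to right.
edge_filter = np.array([[-1.0, 1.0],
                        [-1.0, 1.0]])

fmap = conv2d_valid(image, edge_filter)  # 3x3 feature map
```

The feature map is strongest exactly at the intensity step, illustrating how a single filter highlights one kind of local pattern.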
Large public datasets, including MESSIDOR, EyePacs, and APTOS, provide labeled fundus images [21]. Ophthalmologists were involved in producing the ground-truth annotations. Researchers employ these datasets to generalize their DR-detection models [21]. CNN-based DR-detection models are widely used to detect and grade DR severity, but they demand substantial computational resources. There is a lack of lightweight CNN models for detecting DR severity, which motivated the author to develop a lightweight model for grading DR-severity levels using fundus images.

2. Diabetic Retinopathy Detection by Deep-Learning Technique

Medical professionals can benefit greatly from deep-learning-based systems that automate the interpretation of retinal images and provide objective, consistent assessments of DR severity [21]. DL-based methods can screen a large diabetic population for DR. Deep-learning algorithms can monitor disease progression over time by evaluating successive retinal images [21]. As a result, physicians may fine-tune their approaches to treating patients. Implementing these technologies plays an essential role in augmenting the efficacy and proficiency of DR screening and therapy, ultimately benefiting both patients and healthcare practitioners. Nagpal et al. (2022) [22] discussed recent developments in DR-detection models. Noise and low contrast in the images may reduce DR-image-classification performance. Morphological changes in the retinal images are the key factors in detecting DR [23]. In addition, DR-detection models identify lesions to compute severity levels.
Orlando et al. (2017) [23] proposed a DL-based lesion-detection model using an ensemble technique. Al-hazaimeh et al. (2022) [24] developed a multi-class classification model for detecting DR severity. They followed blood-vessel-based segmentation and optic-disc-based detection techniques for pre-processing the images. In addition, they applied feature-extraction and feature-selection techniques to improve classifier accuracy. Suganyadevi et al. (2022) [25] proposed a DR-detection model for grading the severity of fundus images. They employed CNN models for processing the images, and their multi-class classifier achieved an optimal outcome. Similarly, Nahiduzzaman et al. (2023) [26] developed a DR-identification model using a parallel convolutional neural network. They used an extreme learning machine to extract the key patterns and adjusted the CNN model’s parameters using hyperparameter optimization, classifying the fundus images with a smaller number of parameters.
Abbood et al. (2022) [27] developed a hybrid retinal-image-enhancement algorithm using a DL technique. They applied a retinal-cropping technique to extract the features; Gaussian blur and circle cropping were used to enhance image quality. They employed a ResNet50 model to classify the fundus images. Canayaz (2022) [28] proposed a classification technique to detect DR severity. The Binary Bat algorithm, Equilibrium optimizer, Gray Wolf optimizer, and Gravity search algorithm were used for feature extraction, and a Support Vector Machine and Random Forest were used to classify DR-severity levels. Modi and Kumar (2022) [29] developed a DR-severity-detection model using a Bat-based feature-selection algorithm. They employed a deep-forest technique for image classification. A K-means-based segmentation algorithm was used to identify the lesion region, and feature extraction was performed using a multi-grained scanning method. Dayanna and Emmanuel (2022) [30] proposed a grading system for identifying the severity levels of fundus images. Coherence-enhancing, energy-based regularized level-set evolution was used for blood-vessel segmentation, and an attention-based fusion network was employed to detect the candidate lesion region. They applied a deep CNN model to classify the fundus images.
Furthermore, Savelli et al. (2020) [31] employed a multi-context ensemble-based CNN for detecting lesions in fundus images. Chetoui et al. (2020) [32] employed EfficientNet to identify the abnormalities. Karki et al. (2021) [33] proposed an integrated EfficientNet model for DR classification. Kajan et al. (2020) [34] proposed a CNN model for identifying DR, following a TL technique for classifying the images. Patil et al. (2020) [35] employed a TL technique for DR-severity grading. Tariq et al. (2022) [36] employed ResNet50 and DenseNet121 models for DR-severity-level classification, using the APTOS and EyePacs datasets to evaluate the model. Kobat et al. (2022) [37] applied a pre-trained DenseNet model to grade DR-severity levels. Luo et al. (2023) [38] built a DR-detection model using a deep CNN with local mining and long-range dependence techniques for image classification. Lastly, Ishtiaq et al. (2023) [39] proposed a hybrid technique for classifying fundus images.
Training deep-learning models requires access to datasets that are both large and of high quality. It can be challenging to obtain a diverse and representative dataset of retinal images, especially for rare DR conditions. Imbalanced data may lead the model to produce more false positives [39] and makes it harder to identify severe instances of DR. Interoperability and user-friendly interfaces for healthcare professionals are essential to integrate DL models into clinical settings and electronic health record (EHR) systems [39]. Processing retinal images in real time for prompt diagnosis and prioritization in telemedicine or point-of-care environments can impose a significant demand on resources and require specialist hardware. Existing CNN models, including VGG, ResNet, and DenseNet, demand substantial computational resources for classifying DR severities [39]. There is a demand for lightweight applications that overcome the shortcomings of the existing models and detect DR severities with limited computational resources.
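One common mitigation for the imbalance problem noted above is cost-sensitive training with inverse-frequency class weights, so that rare severe grades contribute more to the loss. The sketch below is illustrative only; the cited studies do not specify this exact scheme, and the label distribution is invented:

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Weight each DR grade inversely to its frequency: weight = n / (k * count),
    where n is the dataset size and k the number of grades. Illustrative sketch."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {grade: n / (k * count) for grade, count in counts.items()}

# Toy label distribution on the usual 0-4 DR grading scale:
# grade 0 (no DR) dominates, grade 4 (proliferative DR) is rare.
labels = [0] * 80 + [1] * 10 + [2] * 6 + [3] * 3 + [4] * 1
weights = inverse_frequency_weights(labels)
```

The resulting weights can be passed to any weighted classification loss; the rarest grade receives the largest weight, counteracting the tendency of the model to favor the majority class.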
In a study (Wahab Sait, A.R. A Lightweight Diabetic Retinopathy Detection Model Using a Deep-Learning Technique. Diagnostics 2023, 13, 3120. https://doi.org/10.3390/diagnostics13193120), the author developed a multi-class DR-severity grading model using a DL technique. The proposed model integrated image pre-processing, Yolo V7, QMPA, and MobileNet V3-Small models. The fundus-image datasets are highly imbalanced and contain noise and artifacts; the suggested image pre-processing technique improved the image quality, and dataset biases were addressed using data augmentation. The feature-extraction process applied the Yolo V7 technique to extract the key features. The author applied QMPA with a Cauchy–Gaussian mutation strategy to select the critical features related to DR severity, and the MobileNet V3 model was employed to classify the images by severity level. The benchmark datasets, including APTOS and EyePacs, were used to generalize the proposed model. The findings highlight the significance of the proposed model in diagnosing DR severity, and the model offers an opportunity to develop a mobile-based application for treating DR patients. However, it encountered limitations in classifying the fundus images: the small size of severity-related lesions in the fundus images may reduce the proposed model’s prediction accuracy.
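Metaheuristic feature selection of the kind QMPA performs can be illustrated with a much simpler stand-in. The toy hill-climbing sketch below flips feature bits and keeps improvements; it does not reproduce the actual QMPA update rules or the Cauchy–Gaussian mutation strategy, and the objective function is invented purely for illustration:

```python
import random

def toy_binary_feature_selection(score, n_features, iters=300, seed=0):
    """Toy mutation-driven wrapper selection: start from a random feature
    subset, flip one bit at a time, and keep strictly improving subsets.
    A stand-in illustration for metaheuristic selectors such as QMPA."""
    rng = random.Random(seed)
    best = [rng.random() < 0.5 for _ in range(n_features)]
    best_score = score(best)
    for _ in range(iters):
        cand = best[:]
        i = rng.randrange(n_features)
        cand[i] = not cand[i]          # mutate one feature bit
        s = score(cand)
        if s > best_score:             # greedy acceptance
            best, best_score = cand, s
    return best, best_score

# Invented objective: features 0 and 2 are "informative"; every selected
# feature also carries a small cost, rewarding compact subsets.
def score(mask):
    return sum(1.0 for i in (0, 2) if mask[i]) - 0.1 * sum(mask)

mask, s = toy_binary_feature_selection(score, n_features=6)
```

In practice the objective would be a classifier's validation accuracy on the selected feature subset, possibly penalized by subset size, which is what makes such wrapper methods attractive for trimming features before a lightweight classifier like MobileNet V3-Small.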

This entry is adapted from the peer-reviewed paper 10.3390/diagnostics13193120
