Deep Learning Methods in a Moroccan Ophthalmic Center

Diabetic retinopathy (DR) remains one of the world’s most frequent eye diseases, leading to vision loss among working-aged individuals. Hemorrhages and exudates are examples of signs of DR. Artificial intelligence (AI), particularly deep learning (DL), is poised to impact nearly every aspect of human life and to gradually transform medical practice. Insight into the condition of the retina is becoming more accessible thanks to major advancements in diagnostic technology. AI approaches can analyze large morphological datasets derived from digital images in a rapid and noninvasive manner. Computer-aided diagnosis tools that automatically detect early-stage DR signs will ease the pressure on clinicians.

  • artificial intelligence
  • deep learning
  • diabetic retinopathy
  • hemorrhages

1. Introduction

Diabetic retinopathy is the primary cause of vision loss in working-aged individuals around the world. Its silent progression makes it a sight-threatening condition. By 2030, the number of patients with vision-threatening diabetic retinopathy (VTDR) is predicted to jump from 37.3 million to 56.3 million [1]. At the national level, according to the Moroccan Society of Ophthalmology [2], the prevalence of DR in Morocco is 35%, with nearly 10% of cases reaching legal blindness.
Diabetic retinopathy is a microvascular complication of diabetes and continues to be the leading cause of preventable ocular morbidity among working-aged individuals. Diabetes currently affects 422 million people globally, with 600 million expected to be affected by 2040, primarily in low- and middle-income nations [3]. DR is found in one-third of diabetics and is associated with a higher risk of life-threatening systemic vascular problems such as small-vessel heart disease, cardiac failure, and stroke [4]. Prevention is the key to reducing the risk of DR progression, by keeping blood pressure, blood glucose, and blood lipids under control. Researchers distinguish many signs of DR, such as retinal hemorrhages, cotton-wool spots, and exudates. Diabetic retinopathy is a type of retinal microangiopathy: it entails changes in the vascular wall as well as in the blood’s rheological characteristics. Although timely laser therapy can help with both macular oedema and proliferative retinopathy, its potential to reverse vision loss is limited. Endo-ocular surgery may be required in rare cases of advanced retinopathy. Common signs and symptoms of diabetic retinopathy are blurred vision, sudden onset of double vision, dryness of the eyes, difficulty perceiving colors, floaters, and difficulty seeing in the dark.
Using different methods of measurement, several studies have found that the progression of diabetes to diabetic retinopathy (DR) is associated with changes in hemodynamics or measurable vascular geometry. Geometric alterations in the retina might indicate the existence of a systemic disease [5]. Notably, several parameters, mainly venous, show a significant change during the development of DR, including an early change two years before DR onset [6].
Furthermore, many studies have demonstrated that genetic polymorphisms of histone methyltransferases, which are responsible for elevated expression of key proinflammatory factors implicated in vascular injury, can be considered predictors of the risk of micro- and macrovascular diabetic complications [7].
The Diabetes Control and Complications Trial (DCCT) showed that intensive glycaemic control in type 1 diabetes reduced the risk of developing diabetic retinopathy (primary prevention) and slowed its progression in a group with mild retinopathy at baseline (secondary prevention) [8].
Early detection of DR has been shown to significantly reduce the risk of vision impairment. Screening programs are conducted within the framework of healthcare policies for blindness prevention [9]. In Morocco, only a few mobile screening caravans are organized to alleviate the burden on the health system, and these efforts are diluted by the ongoing need for diagnosis, treatment, and follow-up monitoring.
Further human and geographic challenges stem from the shortage of trained ophthalmologists and retinal specialists (roughly one ophthalmologist per 68,000 inhabitants), the existence of remote regions with poor access to medical facilities, and the uneven distribution of specialists between large cities and the countryside [10].

2. Application of Deep Learning Methods in a Moroccan Ophthalmic Center

Using the IDRiD dataset [11], Xu et al. [12] developed a segmentation model called FFU-Net (Feature Fusion U-Net), which improves on the U-Net architecture. First, the network’s pooling layers are replaced with convolutional layers to minimize the spatial information loss of the retinal image. Then, the authors integrated multiscale feature fusion (MSFF) blocks into the encoders, which help the network learn multiscale features, and enriched the information carried by the skip connections to the lower-resolution decoders by fusing in contextual channel attention (CCA) modules. Finally, they proposed a balanced focal loss function to address misclassification and data-imbalance issues.
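The exact formulation of the balanced focal loss in [12] is not reproduced here, but the idea can be sketched with a standard class-balanced binary focal loss: a weight alpha counters the foreground/background imbalance, and the exponent gamma down-weights easy pixels. The function name and parameter defaults below are illustrative assumptions, not the paper’s values:

```python
import numpy as np

def balanced_focal_loss(probs, targets, alpha=0.75, gamma=2.0, eps=1e-7):
    """Class-balanced focal loss for binary segmentation maps.

    probs   -- predicted foreground probabilities, any shape
    targets -- binary ground-truth mask, same shape
    alpha   -- weight on the (rare) foreground class
    gamma   -- focusing parameter that down-weights easy pixels
    """
    probs = np.clip(probs, eps, 1.0 - eps)
    # p_t is the probability the model assigned to each pixel's true class
    p_t = np.where(targets == 1, probs, 1.0 - probs)
    # alpha_t re-balances foreground vs. background pixels
    alpha_t = np.where(targets == 1, alpha, 1.0 - alpha)
    loss = -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)
    return loss.mean()
```

Confident, correct predictions contribute almost nothing to the loss, so the gradient concentrates on hard, misclassified pixels such as small lesions.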
Kou et al. [13] proposed an enhanced residual U-Net (ERU-Net) for segmenting microaneurysms (MAs) and exudates (EXs). They evaluated ERU-Net’s performance for MA and EX segmentation on three public datasets: IDRiD, DDR, and E-Ophtha. Across these three datasets, the architecture achieves AUC values of 0.9956, 0.9962, 0.9801, 0.9866, 0.9679, and 0.9609 for microaneurysm and exudate segmentation, all higher than those of the original U-Net.
Li et al. [14] presented MAU-Net, a U-Net-based method for segmenting retinal blood vessels. The authors used the DRIVE, STARE, and CHASE_DB1 datasets to validate their method.
Zhang et al. [15] proposed a CNN architecture that incorporates an Inception-Res module, as well as densely connected convolutional modules, into the U-Net model. The authors tested their model on vessel segmentation from retinal images, MRI brain neoplasm segmentation data from MICCAI BraTS 2017, and lung CT scan segmentation data from Kaggle. Lung segmentation achieved an average Dice score of 0.9857, brain tumor segmentation a Dice score of 0.9867, and vessel segmentation an average Dice score of 0.9582.
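The Dice scores quoted above measure the overlap between a predicted mask and the ground truth, 2|A∩B| / (|A| + |B|). A minimal NumPy implementation of the metric (the function name is ours) might look like this:

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks: 2*|A & B| / (|A| + |B|).

    pred, target -- arrays interpretable as binary masks, same shape
    eps          -- avoids division by zero when both masks are empty
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

A score of 1.0 means a perfect overlap; disjoint masks score close to 0.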
Dai et al. [16] developed a DL solution called DeepDR, which detects the different stages of DR. A total of 466,247 color fundus images was used for training. Detection results for the different DR signs (microaneurysms, cotton-wool spots, hard exudates, and hemorrhages) were 0.901, 0.941, 0.954, and 0.967, respectively. DR classification into mild, moderate, severe, and proliferative stages achieved areas under the curve of 0.943, 0.955, 0.960, and 0.972, respectively.
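The areas under the curve reported here can be read through the Mann-Whitney formulation of AUC: the probability that a randomly chosen positive sample receives a higher score than a randomly chosen negative one. A small rank-based sketch (the scores and labels in the test are hypothetical, not DeepDR outputs) is:

```python
import numpy as np

def auc_score(scores, labels):
    """AUC via the Mann-Whitney U statistic.

    scores -- model confidence for the positive class, one per sample
    labels -- binary ground truth (1 = positive, 0 = negative)
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    # 1-based ranks of the scores, with midranks for ties
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    for s in np.unique(scores):
        tied = scores == s
        ranks[tied] = ranks[tied].mean()
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    # U statistic normalized by the number of positive/negative pairs
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```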
Sambyal et al. [17] used a U-Net model based on a residual network with sub-pixel convolution initialized to nearest-neighbor convolution. The suggested architecture was trained and validated on two publicly accessible datasets, IDRiD and e-ophtha, for microaneurysm and hard exudate segmentation. On the IDRiD dataset, the network obtains a Dice score of 0.9998, as well as 99.88% accuracy, 99.85% sensitivity, and 99.95% specificity, for microaneurysms and exudates.
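Sub-pixel convolution upscales a feature map by producing r² output channels per pixel with an ordinary convolution and then rearranging them spatially (the "pixel shuffle" step). A minimal NumPy sketch of the rearrangement, assuming the PyTorch-style channel ordering, is:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Depth-to-space rearrangement used by sub-pixel convolution.

    x -- feature map of shape (C*r*r, H, W)
    r -- upscaling factor
    Returns an array of shape (C, H*r, W*r), where output[c, h*r+i, w*r+j]
    comes from input channel c*r*r + i*r + j at position (h, w).
    """
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)  # -> (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)
```

Because the upscaling happens as a cheap reshape, the network learns the interpolation weights inside a normal convolution instead of using a fixed transposed convolution.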
Yadav et al. [18] proposed a U-Net-based approach for retinal vessel segmentation. Before the segmentation procedure, preprocessing steps are applied to enhance the affected region of the image. Then, a discrete double-tree Ridgelet transform (DT-DRT) is applied to the dataset to extract features of the region of interest. The proposed segmentation achieved an accuracy of 96.01% on CHASE_DB1, 97.65% on DRIVE, and 98.61% on STARE.
Soomro et al. [19] first applied preprocessing steps to make the training process more efficient. They implemented a CNN model based on a variational autoencoder (VAE), a modified version of U-Net; their main contribution is to replace all pooling layers with progressive convolution and deeper layers. The proposed model generates a segmented vessel image. The authors used both the DRIVE and STARE datasets to train and test their model, obtaining a sensitivity of 0.739 and an accuracy of 0.948 on DRIVE, and a sensitivity of 0.748 and an accuracy of 0.947 on STARE.
Swarup et al. [20] compared different architectures (UNET, TLCNN, PCNSVM, and rSVMbCNN), and then presented a retinal image segmentation method based on a ranking support vector machine (rSVM) combined with a deep convolutional neural network for the detection of diabetic retinopathy. They first compute a pixel-by-pixel score with the rSVM, then apply a deep convolutional neural network for retinal image segmentation, followed by automatic anomaly detection using morphological operations. They achieved segmentation accuracies of 96.4%, 97%, and 98.2% on three different databases: STARE, DIARETDB0, and DIARETDB1.
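The rSVM and CNN stages of [20] are not reproduced here, but the morphological post-processing step can be illustrated with a 3x3 binary opening (erosion followed by dilation), which suppresses speckle smaller than the structuring element in a predicted lesion mask. All function names are illustrative:

```python
import numpy as np

def _neighborhood(mask):
    """All nine 3x3-shifted copies of a binary mask, zero-padded at edges."""
    h, w = mask.shape
    padded = np.pad(mask, 1, constant_values=0)
    return [padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)]

def erode(mask):
    """A pixel survives only if its whole 3x3 neighborhood is foreground."""
    return np.logical_and.reduce(_neighborhood(mask))

def dilate(mask):
    """A pixel turns on if any pixel in its 3x3 neighborhood is foreground."""
    return np.logical_or.reduce(_neighborhood(mask))

def opening(mask):
    """Morphological opening: erosion then dilation removes isolated speckle
    while roughly preserving larger connected regions."""
    return dilate(erode(mask))
```

In practice the same operations are available as `scipy.ndimage.binary_opening`; the explicit version above just makes the mechanism visible.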
Many AI devices have been developed to revolutionize DR screening [21]. Pal et al. [22] applied the You Only Look Once version 3 (YOLOv3) algorithm to automatically detect hemorrhages in fundus images. The YOLOv3 algorithm recognized the red spots and surrounded them with bounding boxes. It identifies bounding boxes using a CNN backbone named Darknet-53 with a squared-error loss function, and uses logistic regression to determine each object’s confidence score. Finally, non-max suppression is used to discard all but the best-fitting bounding boxes. To train their model, the authors used the MESSIDOR dataset, which consists of 1200 RGB fundus images. Only 742 of the 1200 images were selected, with 572 used for training and 170 for validation. The average precision on the test data was 83.3%.
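The non-max suppression step can be sketched independently of YOLOv3: boxes are accepted greedily in order of confidence, and any remaining box that overlaps an accepted one beyond an IoU threshold is discarded. This is a minimal NumPy sketch, not the Darknet implementation:

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, each as (x1, y1, x2, y2)."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)

def non_max_suppression(boxes, scores, iou_thresh=0.5):
    """Keep the highest-scoring box, drop heavily overlapping ones, repeat."""
    boxes = np.asarray(boxes, dtype=float)
    order = np.argsort(scores)[::-1]  # indices, best score first
    keep = []
    while order.size:
        best = order[0]
        keep.append(best)
        rest = order[1:]
        overlaps = iou(boxes[best], boxes[rest])
        order = rest[overlaps <= iou_thresh]
    return keep
```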
Rohan Akut [23] published the results of a YOLO model that detects microaneurysms and identifies their location on retinal images. The algorithm was developed using 85,000 fundus images. The dataset was split 90/10, with 90% used for training and 10% for testing; a further 10% of the training set was held out for validation. The model draws a green bounding box around each detected microaneurysm.
Ming et al. [24] evaluated EyeWisdom, an AI solution based on the YOLO detection system, in a real-world setting. It was created using 25,297 retinal photographs (3785 from Peking Union Medical College Hospital and Henan Eye Hospital and 21,512 from the Kaggle dataset) [25]. The sensitivity was 90.4%, and the specificity was 95.2%.
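Sensitivity and specificity, as reported for EyeWisdom, derive directly from the confusion matrix: sensitivity = TP / (TP + FN) and specificity = TN / (TN + FP). A minimal sketch for binary labels (the data in the test below is hypothetical) is:

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity and specificity from binary ground truth and predictions."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    # sensitivity: fraction of diseased eyes flagged;
    # specificity: fraction of healthy eyes correctly cleared
    return tp / (tp + fn), tn / (tn + fp)
```

For a screening tool, high sensitivity matters most: a missed referable case is costlier than a false alarm that a clinician later dismisses.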
Yang et al. [26] presented a collaborative learning framework for robust DR grading that integrates patch-level lesion features and image-level grading features (CLPI). They used the IDRiD dataset as a lesion dataset; it contains 81 color fundus images (54 for training and 27 for testing) with pixel-level annotations of lesions such as exudates, MAs, and hemorrhages. The authors also used image-level datasets such as Messidor-1 [27], Messidor-2, LIQ-EyePACs [28], and other private datasets. They showed that CLPI outperforms senior ophthalmologists as well as state-of-the-art algorithms, and demonstrated its reliability by evaluating the DR grading methods in real-world scenarios. The findings confirm the effectiveness of the lesion attention scheme and the benefits of CLPI’s end-to-end collaborative learning.

This entry is adapted from the peer-reviewed paper 10.3390/diagnostics13101694
