AI-Powered Diagnosis of Skin Cancer

Skin cancer continues to remain one of the major healthcare issues across the globe. If diagnosed early, skin cancer can be treated successfully. Artificial Intelligence (AI)-based methods can assist in the early detection of skin cancer and can consequently lower its morbidity, and, in turn, alleviate the mortality rate associated with it. Machine learning and deep learning are branches of AI that deal with statistical modeling and inference, which progressively learn from data fed into them to predict desired objectives and characteristics. 

  • artificial intelligence
  • computer-aided diagnostics
  • deep learning
  • skin cancer

1. Introduction

Skin cancer is the abnormal growth of skin cells. The cancerous growth may affect both layers—the dermis and the epidermis. The two types of skin cancer that can arise from the epidermis are carcinomas and melanomas, depending on their cell type—keratinocytes or melanocytes, respectively [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75]. It is a challenge to estimate the incidence of skin cancer for various reasons, such as the multiple sub-types of skin cancer [76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99]. This poses a problem when collating data, as non-melanoma skin cancer is often not tracked by registries, or records are left incomplete because most cases are treated via surgery. As of 2020, the World Cancer Research Fund International reported a total of 300,000 cases of melanoma of the skin and a total of 1,198,073 cases of non-melanoma skin cancer [100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131]. The reasons for the occurrence of skin cancer cannot be singled out; they include, but are not limited to, exposure to ultraviolet rays, family history, and a poor immune system [126]. The affected spot on the skin is called a lesion, which can be further segregated into multiple categories depending on its origin [1]. A comparison between different lesion types is usually based on the presence or absence of certain dermoscopic features.
There are three stages associated with an automated dermoscopy image analysis system, namely pre-processing, image segmentation, and feature extraction [2,4]. Segmentation plays a vital role, as the succeeding steps depend on this stage’s output. Segmentation can be carried out in a supervised manner by considering parameters such as shapes, sizes, and colors, coupled with skin texture and type. Melanoma development that takes place horizontally or radially along the epidermis is called “single cancer melanoma”, which carries critical importance in the early diagnosis of skin cancer [3]. Dermoscopy is a non-invasive diagnostic method that allows for a closer examination of pigmented skin lesions. It is performed with the help of an instrument called a dermatoscope. The procedure of dermoscopy allows for a visualization of skin structures in the epidermis that would not otherwise be visible to the naked eye. Studies [127] suggest that a growing number of practitioners are incorporating dermoscopy into their daily practice. Dermoscopy can be categorized into three modes—polarized contact, polarized non-contact, and nonpolarized contact (unpolarized dermoscopy). Polarized and nonpolarized dermoscopy are complementary, and utilizing both to acquire clinical images increases the diagnostic accuracy [128]. These images can then be processed with the help of AI methods to assist in the diagnosis of skin cancer [132,133,134].
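As a concrete illustration of these three stages, the following minimal Python sketch (using scikit-image) pre-processes a dermoscopic image, segments the lesion, and extracts a few simple shape and intensity features. The Otsu thresholding rule, the feature set, and the file name are illustrative assumptions, not the exact methods of the cited studies.

```python
# A minimal sketch of the three-stage dermoscopy pipeline described above:
# pre-processing, segmentation, and feature extraction.
import numpy as np
from skimage import io, color, filters, measure

def analyze_lesion(image_path):
    # Pre-processing: load the dermoscopic image and convert it to grayscale.
    rgb = io.imread(image_path)
    gray = color.rgb2gray(rgb)

    # Segmentation: Otsu thresholding separates the darker lesion from the skin.
    threshold = filters.threshold_otsu(gray)
    mask = gray < threshold

    # Feature extraction: simple shape descriptors from the largest region.
    labels = measure.label(mask)
    regions = measure.regionprops(labels)
    lesion = max(regions, key=lambda r: r.area)
    return {
        "area": lesion.area,
        "perimeter": lesion.perimeter,
        "eccentricity": lesion.eccentricity,   # rough asymmetry proxy
        "mean_intensity": gray[mask].mean(),   # rough colour proxy
    }

# features = analyze_lesion("lesion.jpg")   # placeholder path
```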

2. Machine Learning and Deep Learning Models for Skin Cancer Diagnosis

2.1. Need for Machine Learning and Deep Learning Models for Skin Cancer Diagnosis

Artificial Intelligence has laid the foundation for integrating computers into the medical field seamlessly [30]. It provides an added dimension to diagnosis, prognosis, and therapy [36]. Recent studies have indicated that machine learning and deep learning models for skin cancer screening have been on the rise. This is primarily because these models, as well as other variants of Artificial Intelligence, combine multiple algorithms that, when provided with data, accomplish the desired tasks. In the current scenario, the tasks include, but are not limited to, the diagnosis of the patient, the prognosis of the patient, or predicting the status of the ongoing treatment [37]. Diagnosis is the process of understanding the prevailing state of the patient, while prognosis refers to the process of predicting the future condition of the patient by extrapolating all the current parameters and their corresponding outputs. AI has now progressed to the point where it can be successfully used to detect cancer earlier than the traditional methods [6]. As early detection is key to successful treatment and better outcomes in skin cancer, the need for machine learning and deep learning models in this field is paramount.

2.2. Machine Learning Techniques

2.2.1. Artificial Neural Networks

Artificial neural networks (ANNs) are systems that draw inspiration from the animal brain. ANNs have been used to predict non-melanoma skin cancer by inputting a certain set of tried and tested training parameters, such as gender, vigorous exercise habits, hypertension, asthma, age, and heart disease [38]. The ANN takes the entire dataset as the input. To improve the accuracy of the model, the network inputs are normalized to values between 0 and 1. The outputs are treated as typical classification outputs, which return fractional values between 0 and 1. ANNs can also be used to detect skin cancer by taking an image input and passing it through hidden layers [39]. This process is carried out in four sequential steps: first, random weights are initialized in the ANN system; next, the activation values are calculated; then, the magnitude of the error, also known as the loss, is computed; finally, the weights are updated proportionately with respect to the loss. The last three steps are repeated until the loss reaches a certain lower bound or floor value. In skin cancer detection, visual inspection is the introductory stage. This is challenging due to the similarities shared between various subcategories of tumors, such as color, area, and distribution. For this reason, the use of ANNs is encouraged to enhance multi-class skin lesion detection [40]. The trained network models are used with a logistic regression model to successfully detect skin lesions while reducing false positives and false negatives in the process. The choice of activation function for the ANN is completely dependent on the user, and it is to be noted that each function carries its own advantages and disadvantages with respect to the convergence of the model and the computational load [40]. ANNs have been used to simultaneously predict various symptoms that commonly occur in cancer-affected patients, as seen in [41]. The symptoms whose risk was predicted were pain, depression, and poor well-being. The input to the ANN was a list of 39 distinct covariates. The input features fall into five subcategories: demographic characteristics such as age and sex, clinical characteristics such as cancer type and stage, treatment characteristics such as radiation treatment and cancer surgery, baseline patient-reported measures such as performance status and symptom burden, and, finally, health care utilization measures such as whether the patient has been hospitalized or has a live-in caregiver. ANNs play an important role in predicting skin cancer and the presence of a tumor due to their flexible structure and data-driven nature, owing to which they are considered a promising modeling approach [42].
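The following minimal Keras sketch mirrors the kind of ANN described above: tabular risk factors, inputs normalized to values between 0 and 1, and a fractional classification output. The layer sizes, the feature list, and the placeholder data are illustrative assumptions, not the architecture used in the cited studies.

```python
# A minimal sketch of a tabular ANN risk predictor, assuming placeholder data.
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from tensorflow import keras

X = np.random.rand(500, 6)          # e.g., gender, age, hypertension, asthma, exercise, heart disease
y = np.random.randint(0, 2, 500)    # 1 = non-melanoma skin cancer, 0 = none (placeholder labels)

X_scaled = MinMaxScaler().fit_transform(X)   # normalize inputs to values between 0 and 1

model = keras.Sequential([
    keras.Input(shape=(6,)),
    keras.layers.Dense(16, activation="relu"),    # hidden layer
    keras.layers.Dense(1, activation="sigmoid"),  # fractional output between 0 and 1
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_scaled, y, epochs=20, batch_size=32, verbose=0)  # weights updated until the loss plateaus
```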

2.2.2. Naïve Bayes

Naïve Bayes classifiers are probabilistic classifiers that work by employing Bayes’ theorem. Naïve Bayes classifiers have been used in the field of skin cancer to classify clinical and dermatological images with high precision [43]. The model reached an accuracy of 70.15%, as it makes use of important pieces of data to develop a strong judgement and assists physicians in the diagnosis and precise detection of the disease. Naïve Bayes classifiers extend their applications by providing a means to detect and segment skin diseases [44]. For each output class of the classifier, a posterior probability distribution is obtained. This process is performed iteratively, which implies that the method requires fewer computational resources, as it avoids the need for multiple training sessions. The Bayesian approach has also been used to probabilistically predict the nature of a data point to a high degree of accuracy, as seen in [45]. The final classification made in this case combines the existing knowledge of data points for use in the Bayesian analysis. A Bayesian sequential framework has also been used to support models that help detect melanoma invasion into human skin. A total of three model parameters were estimated: the melanoma cell proliferation rate, the melanoma cell diffusivity, and a constant that determines the degradation rate of melanoma cells in the skin tissue. The algorithm learns from the following data in a sequential manner: a spatially uniform cell assay, a 2D circular barrier assay, and finally, a 3D invasion assay. This Bayesian framework can be extracted and used in other biological contexts due to its versatile nature. It is chiefly useful in situations where detailed quantitative biological measurements, such as skin lesion extraction from scientific images, are not easy to obtain; hence, the method must rely on simple measurements taken from the images provided, as the Bayesian framework does [46].
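A minimal scikit-learn sketch of this idea is shown below: a Gaussian naïve Bayes classifier trained on hand-crafted lesion features, returning a posterior probability per output class. The feature names and the placeholder data are illustrative assumptions.

```python
# A minimal sketch of a naive Bayes lesion classifier on hand-crafted features.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Each row: [asymmetry, border_irregularity, colour_variance, diameter_mm] (placeholders)
X = np.random.rand(300, 4)
y = np.random.randint(0, 2, 300)   # 1 = malignant, 0 = benign (placeholder labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = GaussianNB().fit(X_train, y_train)
posteriors = clf.predict_proba(X_test)     # posterior probability for each output class
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```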

2.2.3. Decision Tree

Decision trees are a supervised learning method primarily used for classification problems and occasionally extended to fit regression problem statements as well. Decision trees have been used to provide an intuitive algorithm that helps quantify the long-term risk of non-melanoma skin cancer after a liver transplant. This is done by utilizing the variables closely associated with the peri-transplant period [47]. The classifier gives patients a personalized view of their risk and supports tailored interventions such as chemoprophylaxis. A slight variation of decision trees can also be employed, as seen in [48]. The article proposes a random decision tree algorithm to detect breast surgery infection. The risk factors considered by the algorithm in this case included obesity, diabetes, and kidney failure. While the study investigates breast cancer, skin cancer is closely associated with it due to the presence of the dangerous melanoma type. Decision trees also showcase their versatility in the way they are used. In [49], a decision tree is used as a visual representation of the problem, with each branch dividing into the different outcomes possible during a clinical procedure. The decision tree model was used to gauge the cost effectiveness of the sentinel lymph node biopsy, a new standard technique used in the treatment of melanoma and breast cancer. The cost effectiveness was measured with respect to head and neck cutaneous squamous cell carcinoma, a subsection of skin cancer. The decision tree presented outputs to determine whether the treatment was cost effective for a particular set of tumors, or if it could be used generally. Decision trees can also be used as an intermediate layer instead of being kept as a standalone classifier.
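The sketch below shows why decision trees read as an intuitive risk-stratification tool: the fitted tree can be printed as human-readable if/else rules. The covariates, depth, and placeholder data are illustrative assumptions, not the variables of the cited transplant study.

```python
# A minimal sketch of a decision tree stratifying non-melanoma skin cancer risk
# from tabular clinical variables (placeholder data and feature names).
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Each row: [age, years_since_transplant, immunosuppressant_dose, prior_keratosis]
X = np.random.rand(400, 4)
y = np.random.randint(0, 2, 400)   # 1 = developed non-melanoma skin cancer (placeholder labels)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Print the tree as readable decision rules for clinical interpretation.
print(export_text(tree, feature_names=[
    "age", "years_since_transplant", "immunosuppressant_dose", "prior_keratosis"]))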

2.2.4. K-Nearest Neighbors

The k-nearest neighbors algorithm, also referred to as KNN, is a non-parametric supervised classification algorithm that uses distance and proximity as metrics to classify data points. KNN was used as an evaluation algorithm to detect skin cancer and melanomas. The KNN model was then used to produce a confusion matrix, which helps with visualizing the accuracy of the entire model [52]. Apart from this use case, KNN has also been used extensively by extending the model as required. In [53], KNN is extended to the Radius Nearest Neighbors classifier to classify breast cancer and calculate evaluation metrics such as accuracy and specificity. The reason for augmenting the KNN lay solely in the limitations posed by extreme values of k. For a small k, the KNN classifier is highly sensitive to outliers, and for a large value of k, the classifier underfits the training data points. This problem is overcome by normalizing the radius value of each point to recognize outliers effectively. The applications of KNN have been expanded by using it to detect the anomalous growth of skin lesions [54]. KNN is hybridized with the Firefly optimization algorithm to provide quantitative information about a skin lesion without having to perform unnecessary skin biopsies. The hybrid classifier built upon KNN predicts and classifies using two primary methods: threshold-based segmentation and ABCD feature extraction. The Firefly optimization coupled with KNN helps to recognize skin cancer much more effectively than its predecessors, while keeping computational and temporal complexity to a minimum.
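A minimal scikit-learn sketch of both variants is given below: a standard k-NN classifier evaluated with a confusion matrix, and a radius-based neighbor classifier that replaces the fixed k with a fixed radius. The feature vectors, radius value, and labels are illustrative assumptions.

```python
# A minimal sketch of k-NN and its radius-based variant on lesion feature vectors.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier, RadiusNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

X = np.random.rand(300, 4)           # ABCD-style lesion features (placeholder)
y = np.random.randint(0, 2, 300)     # 1 = melanoma, 0 = benign (placeholder labels)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print(confusion_matrix(y_test, knn.predict(X_test)))

# Radius-based variant: neighbours are taken within a fixed radius instead of a
# fixed k, reducing the sensitivity to outliers seen with extreme values of k.
rnn = RadiusNeighborsClassifier(radius=0.8, outlier_label="most_frequent").fit(X_train, y_train)
print(confusion_matrix(y_test, rnn.predict(X_test)))
```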

2.2.5. K-Means Clustering

K-means clustering is a clustering method that is grouped under unsupervised learning. By employing fuzzy logic with the existing k-means clustering algorithm, studies have been conducted on segmenting skin melanoma at its earliest stage [56]. Fuzzy k-means clustering is applied to the pre-processed clinical images to delineate the affected regions. This aids the subsequent process of melanoma disease recognition. K-means clustering has widespread uses and can be applied to segment skin lesions, as seen in [57]. The algorithm groups objects so that the variance within each group is at a minimum. This enables the method to return finely segmented images. Each image pixel is assigned a randomly initialized class center. The centers are recalculated as data points are added, and the process is repeated until all the data points have been assigned to clusters. Unlike k-means, where each data point can belong to only one cluster, fuzzy c-means clustering enables data points to be part of any number of clusters, with a likelihood attached to each membership. The fuzzy c-means algorithm outputs comparatively better results than the legacy k-means clustering algorithm. Fuzzy c-means assigns each data point a probability that depends on the distance between the cluster center and the point itself. In [58], fuzzy c-means was used in place of the k-means algorithm to detect skin cancer, inspired by a differential evolution artificial neural network. The simulated results indicated that the proposed method outperformed traditional approaches in this regard. The k-means algorithm can also be used as an intermediate layer to produce outputs that deep learning methods are then trained on.
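The sketch below illustrates k-means used for lesion segmentation: pixel colours are clustered and the darkest cluster is taken as the lesion. The cluster count, the "darkest cluster" rule, and the file name are illustrative assumptions, not the exact procedure of the cited studies.

```python
# A minimal sketch of k-means colour segmentation of a dermoscopic image.
import numpy as np
from skimage import io
from sklearn.cluster import KMeans

rgb = io.imread("lesion.jpg")                    # placeholder image path
pixels = rgb.reshape(-1, 3).astype(float)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(pixels)
labels = kmeans.labels_.reshape(rgb.shape[:2])

# Assume the lesion corresponds to the cluster with the darkest mean colour.
darkest = np.argmin(kmeans.cluster_centers_.sum(axis=1))
lesion_mask = labels == darkest
```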

2.2.6. Random Forest

Random forests are an extension of decision trees. They are an ensemble learning method commonly used for classification problems. Random forests extend their applications to detect skin cancer and classify skin lesions, as done in [61]. Random forests permit the evaluation of sampling allocation. The proposed method first initializes a training set, which is then bootstrapped to generate multiple sub-training sets. By calculating the Gini index for each of the sub-training sets, the model is populated with decision values. The individual decision values are then combined to generate a model that classifies by voting on the test samples. Skin cancer can also be classified by characterizing the Mueller matrix elements using the random forest algorithm [62]. The random forest algorithm builds various sub-decision trees as the foundation for classification and categorization tasks. Every individual decision tree is provided with a unique logic that constitutes the binary question framework used throughout the system. In comparison with a single decision tree, the random forest provides enhanced results while reducing the variance. This helps to prevent the overfitting of the data that was otherwise seen in decision trees. Other studies in the classification of skin cancer involve classifying dermoscopic images into seven sub-types, implemented with the help of random forests [63].
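In the scikit-learn sketch below, the bootstrapped sub-training sets, Gini splits, and majority voting described above are all handled internally by the library. The feature dimensions, number of trees, and placeholder data are illustrative assumptions.

```python
# A minimal sketch of a random forest over dermoscopic feature vectors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X = np.random.rand(500, 10)            # extracted lesion features (placeholder)
y = np.random.randint(0, 7, 500)       # seven dermoscopic sub-types (placeholder labels)

forest = RandomForestClassifier(
    n_estimators=200,       # number of sub-decision trees
    criterion="gini",       # Gini index used at each split
    bootstrap=True,         # each tree sees a bootstrapped sub-training set
    random_state=0,
)
print(cross_val_score(forest, X, y, cv=5).mean())   # final prediction is a vote over the trees
```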

2.2.7. Support Vector Machine

Support vector machines (SVMs) are supervised learning models that help classify, predict, and extrapolate data by analyzing them. SVMs can be used to classify different types of skin lesions. In [65], ABCD features are used for extracting characteristic features such as shape, color, and size from the clinical images provided. After selecting the features, the skin lesion is classified with the help of SVMs into melanoma, seborrheic keratosis, and lupus erythematosus. This method of using ABCD features along with an SVM generates good results while delivering significant information. For a narrower classification, SVMs have also been used to classify skin lesions as melanoma or non-melanoma [66]. The process was divided into six phases: acquiring the image, pre-processing the image, segmentation, extracting the features, classifying the image, and viewing the result. In this experiment, the features extracted were texture, color, and shape. Extending the above model, SVMs have also been employed to identify and detect carcinoma or infection in the early stages, before it aggravates [67]. The chief difference between the extension and the original model lies in the feature extraction procedure. In [67], the input image is pre-processed by grayscale conversion, followed by noise removal and binarization subprocesses. The region of interest is isolated during segmentation to help with accurate classification. Similarly, for the early detection and diagnosis of skin cancer, a bag-of-features method was used, which included spatial information. The SVM was developed with the help of an optimized set of histogram-of-oriented-gradients features. This produced encouraging results when compared to state-of-the-art algorithms [68]. By using the Bendlet Transform (BT) to provide the features of the SVM classifier, unwanted features such as hair and noise are discarded; these are removed in a preliminary median filtering step. BT outperforms representation systems such as wavelets, curvelets, and contourlets, as it can classify singularities in images much more precisely [69].
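The sketch below pairs histogram-of-oriented-gradients features with an SVM, roughly in the spirit of the bag-of-features approach in [68]. The image size, HOG parameters, kernel choice, and placeholder data are illustrative assumptions rather than the cited configuration.

```python
# A minimal sketch of an SVM lesion classifier over HOG features.
import numpy as np
from skimage import color, transform
from skimage.feature import hog
from sklearn.svm import SVC

def hog_features(rgb_image):
    gray = color.rgb2gray(rgb_image)
    gray = transform.resize(gray, (128, 128))             # fixed input size
    return hog(gray, orientations=9, pixels_per_cell=(16, 16), cells_per_block=(2, 2))

# images: list of RGB arrays; labels: 1 = melanoma, 0 = non-melanoma (placeholders)
images = [np.random.rand(200, 200, 3) for _ in range(50)]
labels = np.random.randint(0, 2, 50)

X = np.array([hog_features(im) for im in images])
svm = SVC(kernel="rbf", probability=True).fit(X, labels)
```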

2.2.8. Ensemble Learning

Ensemble learning is a machine learning approach that combines the predictions of two or more models. The constituent models are also called ensemble members. These models can be trained on the same dataset or on completely different data. The ensemble members are grouped together to output a prediction for the problem statement. Ensemble classifiers have been used for diagnosing melanoma as malignant or benign [70]. The ensemble members are trained individually on balanced subspaces, thereby reducing the redundant predictors. The remaining classifiers are grouped using a neural network fuser. The presented ensemble classifier model returns statistically better results than other individual dedicated classifier models. Furthermore, ensemble learning has also been used in the multi-class classification of skin lesions to assist clinicians in early detection [71]. The ensemble model made use of five deep neural network models: ResNeXt, SeResNeXt, ResNet, Xception, and DenseNet. Collectively, the ensemble model performed better than each of them individually.
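A minimal sketch of the ensemble idea is given below using soft voting over three simple base classifiers. The choice of members and the placeholder data are illustrative assumptions; the deep ensemble in [71] combines CNN backbones rather than these classical models.

```python
# A minimal sketch of an ensemble that combines member predictions by soft voting.
import numpy as np
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

X = np.random.rand(400, 8)
y = np.random.randint(0, 2, 400)   # 1 = malignant, 0 = benign (placeholder labels)

ensemble = VotingClassifier(
    estimators=[
        ("svm", SVC(probability=True)),
        ("forest", RandomForestClassifier(n_estimators=100)),
        ("logreg", LogisticRegression(max_iter=1000)),
    ],
    voting="soft",   # average the members' predicted probabilities
).fit(X, y)
```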

2.3. Deep Learning Techniques

2.3.1. Recurrent Neural Network

A recurrent neural network (RNN) is categorized as a subdivision of artificial neural networks. RNNs have been used in the detection of melanoma skin cancer [72]. The classification phase of the proposed model employs deep learning techniques by combining an optimization scheme with an RNN. The existing region growing algorithm and the RNN are improved by using them alongside a modified deer hunting optimization algorithm (DHOA). Apart from standalone models, RNNs have also been used in ensemble models alongside convolutional neural networks, as seen in [73], to classify skin diseases. Predecessor models were unable to use the long-term dependence between key image features and image classes, which served as the motivation for the proposed model. Deep features are extracted from the clinical images, after which the features are fed into a dual bidirectional long short-term memory network to learn them. Ultimately, a softmax activation function is used to classify the images. Similarly, ensemble models can also be used to automate the detection of breast cancer in mammograms [74].
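The Keras sketch below follows the general CNN-plus-bidirectional-LSTM pattern described above: deep features from an image backbone are treated as a short sequence and classified with softmax. The backbone choice, layer sizes, and input shape are illustrative assumptions, not the published architecture of [73].

```python
# A minimal sketch of CNN feature extraction followed by bidirectional LSTMs.
from tensorflow import keras
from tensorflow.keras import layers

num_classes = 7
inputs = keras.Input(shape=(224, 224, 3))
backbone = keras.applications.MobileNetV2(include_top=False, weights=None,
                                          input_shape=(224, 224, 3))
features = backbone(inputs)                       # (7, 7, 1280) spatial feature grid
seq = layers.Reshape((7 * 7, 1280))(features)     # treat grid cells as a sequence
x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(seq)
x = layers.Bidirectional(layers.LSTM(64))(x)
outputs = layers.Dense(num_classes, activation="softmax")(x)

model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```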

2.3.2. Deep Autoencoder

Deep autoencoders are neural networks that are trained to emulate the input as the output. They consist of two symmetrical deep belief networks. In the field of skin cancer, deep autoencoders have been used for reconstructing the dataset, which is then used to detect melanocytes by employing spiking neural networks [76]. The structure of the autoencoder model consists of three main layers: the input layer, the hidden layers, and the output layer. The model rests on the principle that the features are not independent of one another; otherwise, the efficiency of the model would be compromised. Autoencoders have also been used to recognize and detect melanoma skin disease [77]. The various autoencoders used were Deeplabv3+, Inception-ResNet-v2-unet, mobilenetv2_unet, Resnet50_unet, and vgg19_unet. Quantitative evaluation metrics showed that Deeplabv3+ was a significant upgrade over the other architectures used in the study to detect melanoma. Skin cancer detection has also been carried out with the help of custom algorithms involving autoencoders, such as the social bat optimization algorithm [78]. The detection process takes place in three steps. Firstly, the clinical images are pre-processed to remove the noise and artefacts present. The pre-processed images are then fed to the feature extraction stage through a convolutional neural network and a local pixel pattern-based texture feature. After this stage, the classification is completed using a deep stacked autoencoder, much like the evaluation in [77,79] of different autoencoders for skin lesion detection. The five architectures evaluated in that study are u-net, resu-net, vgg16unet, densenet121, and efficientnetb0. Among the evaluated architectures, the densenet121 architecture showed the highest accuracy.
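The Keras sketch below shows the basic autoencoder structure described above: an input layer, symmetric hidden layers around a bottleneck, and an output layer trained to reproduce the input. The layer sizes and patch dimensions are illustrative assumptions, not the architectures evaluated in [77,79].

```python
# A minimal sketch of a deep autoencoder trained to reconstruct its input.
from tensorflow import keras
from tensorflow.keras import layers

input_dim = 64 * 64            # flattened grayscale lesion patch (placeholder size)
inputs = keras.Input(shape=(input_dim,))
encoded = layers.Dense(512, activation="relu")(inputs)
encoded = layers.Dense(64, activation="relu")(encoded)      # bottleneck representation
decoded = layers.Dense(512, activation="relu")(encoded)
outputs = layers.Dense(input_dim, activation="sigmoid")(decoded)

autoencoder = keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")
# autoencoder.fit(X, X, epochs=..., batch_size=...)  # the target equals the input
```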

2.3.3. Long Short-Term Memory

Long short-term memory, or LSTM, is an artificial neural network that uses feedback connections to enable the processing not only of single data points, but also of sequential data. LSTM has helped in classifying skin diseases by efficiently maintaining stateful information for accurate predictions [80]. The robustness of the proposed algorithm helps to recognize target regions faster, while using almost half the number of computations compared to predecessor algorithms. The use of LSTM further bolsters the accuracy of prediction due to its retention of previous timestamps. Beyond plain recognition, LSTMs can also be used to predict cancer and tumors from irregular medical data [81]. This is made possible by the enhanced overall performance of LSTMs in screening time series data. The risk groups dealt with in the proposed study correlated well with the temporal cancer data (time to cancer diagnosis). Skin disease classification models have been designed using deep learning approaches such as LSTM with the assistance of hybrid optimization algorithms such as the Hybrid Squirrel Butterfly Search Optimization (HSBSO) algorithm [82]. The modified LSTM is developed by using the HSBSO to optimize the LSTM parameters and maximize the classification accuracy. LSTMs help in improving the overall efficiency of the proposed skin disease classification model. Deep learning models are not limited to the clinical images of tumors alone.
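The Keras sketch below shows an LSTM screening sequential patient data, such as repeated tumour-marker measurements over time, to predict a risk group. The sequence length, marker count, and placeholder data are illustrative assumptions, not the setup of the cited study [81].

```python
# A minimal sketch of an LSTM risk classifier over time-series patient data.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

timesteps, n_markers = 12, 5                   # 12 visits, 5 tumour markers (placeholder)
X = np.random.rand(200, timesteps, n_markers)
y = np.random.randint(0, 2, 200)               # 1 = high-risk group (placeholder labels)

model = keras.Sequential([
    keras.Input(shape=(timesteps, n_markers)),
    layers.LSTM(32),                            # retains information from earlier timesteps
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[keras.metrics.AUC()])
model.fit(X, y, epochs=5, verbose=0)
```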

2.3.4. Deep Neural Network

Deep neural networks are neural networks that expand to a certain level of complexity and depth, typically taken to be two or more hidden layers. Deep nets have been used to estimate the uncertainty in skin cancer detection [84]. The motivation behind the model lies in the inability of publicly available skin cancer detection software to provide confidence estimates for its predictions. The study proposes Deep Uncertainty Estimation for Skin Cancer (DUNEScan), which provides an in-depth and intuitive analysis of the uncertainty involved in each prediction. Deep nets have also been used to classify skin cancer at a dermatologist level [85]. The classification of skin lesions with the help of images alone is an arduous task due to the minute variations in the visual appearance of lesions. Deep nets show immense potential for varied tasks that comprise multiple fine subcategories. The performance of the model is evaluated using biopsy-proven clinical images across two binary classification problems: keratinocyte carcinomas versus benign seborrheic keratoses, and malignant melanomas versus benign nevi. The deep net model achieves a performance that matches and, in some cases, outperforms all the experts associated with the evaluation program.

2.3.5. Deep Belief Network

Deep belief networks (DBNs) are generative graphical models that are composed of multiple layers of latent variables. DBNs have been used for cancer prediction, as can be seen in [88]. The model training is performed in two steps. Firstly, each layer is separately trained in an unsupervised manner in order to retain the maximum feature information. Subsequently, the output features are used to train the entity relationship classifier in a supervised manner. DBNs have been designed to automatically detect regions of breast mass and diagnose them as benign, malignant, or neither [89]. The proposed DBN performs comparatively better than its conventional counterparts because the conventional approaches depend on the output of feature selection algorithms, whereas all the features were used directly, without any reduction in their dimensions, for the DBN model. DBNs have also been studied as a replacement for the traditional approach to the dermoscopic diagnosis of skin melanoma [90]. The deep belief learning network architecture disperses the weights and hyperparameters to every position in the clinical image, which makes it possible to scale the algorithm to images of varying sizes. The images are first passed through a Gaussian filter to remove the extreme high and low intensities. Subsequently, the pre-processed images are segmented using the k-means algorithm. The resultant images are then classified as per the output format of the proposed DBN.

2.3.6. Deep Convolutional Neural Network

Convolutional neural networks (CNNs) are artificial neural networks that are primarily used in image processing and recognition. Deep convolutional neural networks have been implemented to classify skin cancer into four different categories: basal cell carcinoma, squamous cell carcinoma, actinic keratosis, and melanoma [91]. The methodology involves two methods, an error-correcting output codes support vector machine (ECOC SVM) classifier and a deep CNN, with accuracy, sensitivity, and specificity used as evaluation parameters. A slight variation of this method introduces a LeNet-5 architecture along with a deep CNN to classify the image data [92]. The model aids the diagnosis of melanoma cancer. The experimental results indicate that the training data and the number of training epochs are integral to the detection and diagnosis of melanoma cancer. Results suggest that training the model for more than 100 epochs may lead to overfitting, while training it for fewer than 100 epochs leads to underfitting. In addition, several parameters account for the accuracy of the results, such as the learning rate, the number of layers, and the dimensions of the input image. Since dermatologists use patient data alongside images for increased diagnostic accuracy, recent studies have investigated the influence of integrating patient data with image features in deep CNN models [93]. The commonly used patient data were sex, age, and lesion location, and one-hot encoding was performed to accommodate them. The key differentiator between the fusion approaches was the complexity associated with each classification task. The studies indicate the potential benefits and advantages of amalgamating patient data into a deep CNN algorithm. Region-based CNNs have been employed to detect keratinocytic skin cancer on the face [94]. The algorithm aims to automatically locate the affected and suspected areas by returning a probabilistic value of a malignant lesion. The deep CNN was trained on over one million image crops to help locate and diagnose cancer. While the algorithm demonstrated great potential, certain pitfalls were highlighted: firstly, skin markings were mistaken for lesions by the deep CNN model; secondly, the testing data usually made use of the physician’s evaluation data rather than the clinical photographs alone, which ultimately led to the need for a multimodal approach. Recent developments have enabled newly designed models to outperform expert dermatologists and contemporary deep learning methods in multi-class skin cancer classification using deep CNNs [95]. The model was fine-tuned over the seven classes in the HAM10000 dataset. While ensemble models increase the accuracy for classification problems, they do not play a major role in refining the performance of a finely tuned hyperparameter setup for deep CNNs.
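The Keras sketch below shows a small deep CNN for seven-class skin lesion classification in the spirit of the HAM10000 setups described above. The architecture, learning rate, and input size are illustrative assumptions, not the fine-tuned models of the cited studies.

```python
# A minimal sketch of a deep CNN for seven-class skin lesion classification.
from tensorflow import keras
from tensorflow.keras import layers

num_classes = 7
model = keras.Sequential([
    keras.Input(shape=(224, 224, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=..., validation_data=...)
```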

2.3.7. Deep Boltzmann Machine

Deep Boltzmann machines (DBMs) are probabilistic, unsupervised, generative models that possess undirected connections between multiple layers within the model. Multi-modal DBMs have been proposed to monitor and diagnose cancer before the mortality rate rises [96]. The multi-modal DBM learns the correlations within an instance’s genetic structure. The testing and evaluation phases use these correlations to predict the genes carrying cancer-causing mutations specific to the specimen. By combining restricted Boltzmann machines (RBMs) and a skin lesion classification model through optimal segmentation, the OS-RBM model helps to detect and classify the presence of skin lesions in clinical images [97]. The OS-RBM model carries out certain steps sequentially: image acquisition, pre-processing using Gaussian filters, segmenting the pre-processed images, extracting the features, and classifying the images. Image segmentation is executed through the Artificial Bee Colony algorithm.

2.3.8. Deep Reinforcement Learning

Reinforcement learning (RL) is a training method often associated with rewarding desired behaviors and punishing undesired ones. Reinforcement learning algorithms have been incorporated into the medical scene to automatically detect skin lesions [98]. This is done by initially proceeding from a coarse segmentation towards sharp and fine results. The model is trained on the popular ISIC 2017 and HAM10000 datasets. The regions are initially delineated, and by tuning the hyperparameters appropriately, the segmentation accuracy is boosted. As deep RL methods have the capability to detect and segment small irregular shapes, their potential in the medical domain is immense.

2.3.9. Extreme Learning Machine

Extreme learning machines (ELMs) are essentially feedforward neural networks. While they provide good generalization performance, the major difference from conventional networks arises in the learning speed. ELM models have been proposed to tackle the existing problem of skin cancer detection [99]. Detection takes place by differentiating between benign and malignant lesions. After pre-processing the clinical images, the regions are segmented using the Otsu method. The model is optimized and learns with the help of a deep belief network that incorporates a Thermal Exchange Optimization algorithm. Using hybrid pretrained models along with ELMs for diagnosing skin cancer has also been researched [100].

References

  1. Murugan, A.; Nair, S.A.H.; Preethi, A.A.P.; Kumar, K.P.S. Diagnosis of skin cancer using machine learning techniques. Microprocess. Microsyst. 2020, 81, 103727.
  2. Vijayalakshmi, M.M. Melanoma skin cancer detection using image processing and machine learning. Int. J. Trend Sci. Res. Dev. 2019, 3, 780–784.
  3. Ozkan, I.A.; Koklu, M. Skin lesion classification using machine learning algorithms. Int. J.-Telligent Syst. Appl. Eng. 2017, 5, 285–289.
  4. Monika, M.K.; Vignesh, N.A.; Kumari, C.U.; Kumar, M.; Lydia, E.L. Skin cancer detection and classification using machine learning. Mater. Today Proc. 2020, 33, 4266–4270.
  5. Nahata, H.; Singh, S.P. Deep learning solutions for skin cancer detection and diagnosis. In Machine Learning with Health Care Perspective; Springer: Cham, Switzerland, 2020; pp. 159–182.
  6. Das, K.; Cockerell, C.J.; Patil, A.; Pietkiewicz, P.; Giulini, M.; Grabbe, S.; Goldust, M. Machine Learning and Its Application in Skin Cancer. Int. J. Environ. Res. Public Health 2021, 18, 13409.
  7. Tufail, A.B.; Ma, Y.-K.; Kaabar, M.K.A.; Martínez, F.; Junejo, A.R.; Ullah, I.; Khan, R. Deep learning in cancer diagnosis and prognosis prediction: A minireview on challenges, recent trends, and future directions. Comput. Math. Methods Med. 2021, 2021, 9025470.
  8. Munir, K.; Elahi, H.; Ayub, A.; Frezza, F.; Rizzi, A. Cancer Diagnosis Using Deep Learning: A Bibliographic Review. Cancers 2019, 11, 1235.
  9. Goyal, M.; Knackstedt, T.; Yan, S.; Hassanpour, S. Artificial intelligence-based image classification methods for diagnosis of skin cancer: Challenges and opportunities. Comput. Biol. Med. 2020, 127, 104065.
  10. Li, H.; Pan, Y.; Zhao, J.; Zhang, L. Skin disease diagnosis with deep learning: A review. Neurocomputing 2021, 464, 364–393.
  11. Shastry, K.A.; Sanjay, H.A. Cancer diagnosis using artificial intelligence: A review. Artif. Intell. Rev. 2021, 55, 2641–2673.
  12. Painuli, D.; Bhardwaj, S.; Köse, U. Recent advancement in cancer diagnosis using machine learning and deep learning techniques: A comprehensive review. Comput. Biol. Med. 2022, 146, 105580.
  13. Naeem, A.; Farooq, M.S.; Khelifi, A.; Abid, A. Malignant Melanoma Classification Using Deep Learning: Datasets, Performance Measurements, Challenges and Opportunities. IEEE Access 2020, 8, 110575–110597.
  14. Haggenmüller, S.; Maron, R.C.; Hekler, A.; Utikal, J.S.; Barata, C.; Barnhill, R.L.; Beltraminelli, H.; Berking, C.; Betz-Stablein, B.; Blum, A.; et al. Skin cancer classification via convolutional neural networks: Systematic review of studies involving human experts. Eur. J. Cancer 2021, 156, 202–216.
  15. Adegun, A.; Viriri, S. Deep learning techniques for skin lesion analysis and melanoma cancer detection: A survey of state-of-the-art. Artif. Intell. Rev. 2020, 54, 811–841.
  16. Saba, T. Recent advancement in cancer detection using machine learning: Systematic survey of decades, comparisons and challenges. J. Infect. Public Health 2020, 13, 1274–1289.
  17. Usama, M.; Naeem, M.A.; Mirza, F. Multi-Class Skin Lesions Classification Using Deep Features. Sensors 2022, 22, 8311.
  18. Bratchenko, I.A.; Bratchenko, L.A.; Khristoforova, Y.A.; Moryatov, A.A.; Kozlov, S.V.; Zakharov, V.P. Classification of skin cancer using convolutional neural networks analysis of Raman spectra. Comput. Methods Programs Biomed. 2022, 219, 106755.
  19. Brinker, T.J.; Hekler, A.; Utikal, J.S.; Grabe, N.; Schadendorf, D.; Klode, J.; Berking, C.; Steeb, T.; Enk, A.H.; von Kalle, C. Skin Cancer Classification Using Convolutional Neural Networks: Systematic Review. J. Med. Internet Res. 2018, 20, e11936.
  20. Bakos, R.M.; Blumetti, T.P.; Roldán-Marín, R.; Salerni, G. Noninvasive Imaging Tools in the Diagnosis and Treatment of Skin Cancers. Am. J. Clin. Dermatol. 2018, 19, 3–14.
  21. Wakelin, S.H. Benign skin lesions. Medicine 2021, 49, 443–446.
  22. Fujisawa, Y.; Otomo, Y.; Ogata, Y.; Nakamura, Y.; Fujita, R.; Ishitsuka, Y.; Fujimoto, M. Deep-learning-based, computer-aided classifier developed with a small dataset of clinical images surpasses board-certified dermatologists in skin tumor diagnosis. Br. J. Dermatol. 2019, 180, 373–381.
  23. Papageorgiou, V.; Apalla, Z.; Sotiriou, E.; Papageorgiou, C.; Lazaridou, E.; Vakirlis, S.; Ioannides, D.; Lallas, A. The limitations of dermoscopy: False-positive and false-negative tumors. J. Eur. Acad. Dermatol. Venereol. 2018, 32, 879–888.
  24. Catalano, O.; Roldán, F.A.; Varelli, C.; Bard, R.; Corvino, A.; Wortsman, X. Skin cancer: Findings and role of high-resolution ultrasound. J. Ultrasound 2019, 22, 423–431.
  25. Jinnai, S.; Yamazaki, N.; Hirano, Y.; Sugawara, Y.; Ohe, Y.; Hamamoto, R. The development of a skin cancer classification system for pigmented skin lesions using deep learning. Biomolecules 2020, 10, 1123.
  26. Ghazal, T.M.; Hussain, S.; Khan, M.F.; Said, R.A.T.; Ahmad, M. Detection of Benign and Malignant Tumors in Skin Empowered with Transfer Learning. Comput. Intell. Neurosci. 2022, 2022, 4826892.
  27. Giavina-Bianchi, M.; Cordioli, E.; Dos Santos, A.P. Accuracy of Deep Neural Network in Triaging Common Skin Diseases of Primary Care Attention. Front Med. 2021, 8, 670300.
  28. Korhonen, N.; Ylitalo, L.; Luukkaala, T.; Itkonen, J.; Häihälä, H.; Jernman, J.; Snellman, E.; Palve, J. Premalignant lesions, basal cell carcinoma and melanoma in patients with cutaneous squamous cell carcinoma. Arch. Dermatol. Res. 2020, 313, 879–884.
  29. Nauta, M.; Walsh, R.; Dubowski, A.; Seifert, C. Uncovering and Correcting Shortcut Learning in Machine Learning Models for Skin Cancer Diagnosis. Diagnostics 2021, 12, 40.
  30. Chan, S.; Reddy, V.; Myers, B.; Thibodeaux, Q.; Brownstone, N.; Liao, W. Machine Learning in Dermatology: Current Applications, Opportunities, and Limitations. Dermatol. Ther. 2020, 10, 365–386.
  31. Zhang, N.; Cai, Y.-X.; Wang, Y.-Y.; Tian, Y.-T.; Wang, X.-L.; Badami, B. Skin cancer diagnosis based on optimized convolutional neural network. Artif. Intell. Med. 2020, 102, 101756.
  32. Hekler, A.; Utikal, J.S.; Enk, A.H.; Hauschild, A.; Weichenthal, M.; Maron, R.C.; Berking, C.; Haferkamp, S.; Klode, J.; Schadendorf, D.; et al. Superior skin cancer classification by the combination of human and artificial intelligence. Eur. J. Cancer 2019, 120, 114–121.
  33. Wen, D.; Khan, S.M.; Xu, A.J.; Ibrahim, H.; Smith, L.; Caballero, J.; Zepeda, L.; Perez, C.D.B.; Denniston, A.K.; Liu, X.; et al. Characteristics of publicly available skin cancer image datasets: A systematic review. Lancet Digit. Health 2021, 4, e64–e74.
  34. Veta, M.; Heng, Y.J.; Stathonikos, N.; Bejnordi, B.E.; Beca, F.; Wollmann, T.; Rohr, K.; Shah, M.A.; Wang, D.; Rousson, M.; et al. Predicting breast tumor proliferation from whole-slide images: The TUPAC16 challenge. Med. Image Anal. 2019, 54, 111–121.
  35. Han, S.S.; Kim, M.S.; Lim, W.; Park, G.H.; Park, I.; Chang, S.E. Classification of the Clinical Images for Benign and Malignant Cutaneous Tumors Using a Deep Learning Algorithm. J. Investig. Dermatol. 2018, 138, 1529–1538.
  36. Sharma, A.N.; Shwe, S.; Mesinkovska, N.A. Current state of machine learning for non-melanoma skin cancer. Arch. Dermatol. Res. 2022, 314, 325–327.
  37. Murphree, D.H.; Puri, P.; Shamim, H.; Bezalel, S.A.; Drage, L.A.; Wang, M.; Comfere, N. Deep learning for dermatologists: Part I. Fundamental concepts. J. Am. Acad. Dermatol. 2020, 87, 1343–1351.
  38. Roffman, D.; Hart, G.; Girardi, M.; Ko, C.J.; Deng, J. Predicting non-melanoma skin cancer via a multi-parameterized artificial neural network. Sci. Rep. 2018, 8, 1–7.
  39. Sugiarti, S.; Yuhandri, Y.; Na’am, J.; Indra, D.; Santony, J. An artificial neural network approach for detecting skin cancer. Telecommun. Comput. Electron. Control. 2019, 17, 788–793.
  40. Lopez-Leyva, J.A.; Guerra-Rosas, E.; Alvarez-Borrego, J. Multi-Class Diagnosis of Skin Lesions Using the Fourier Spectral Information of Images on Additive Color Model by Artificial Neural Network. IEEE Access 2021, 9, 35207–35216.
  41. Xuyi, W.; Seow, H.; Sutradhar, R. Artificial neural networks for simultaneously predicting the risk of multiple co-occurring symptoms among patients with cancer. Cancer Med. 2020, 10, 989–998.
  42. Sutradhar, R.; Barbera, L. Comparing an Artificial Neural Network to Logistic Regression for Predicting ED Visit Risk Among Patients with Cancer: A Population-Based Cohort Study. J. Pain Symptom Manag. 2020, 60, 1–9.
  43. Alwan, O.F. Skin cancer images classification using naïve bayes. Emergent J. Educ. Discov. Lifelong Learn. 2022, 3, 19–29.
  44. Balaji, V.R.; Suganthi, S.T.; Rajadevi, R.; Kumar, V.K.; Balaji, B.S.; Pandiyan, S. Skin disease detection and segmentation using dynamic graph cut algorithm and classification through Naive Bayes classifier. Measurement 2020, 163, 107922.
  45. Mobiny, A.; Singh, A.; Van Nguyen, H. Risk-Aware Machine Learning Classifier for Skin Lesion Diagnosis. J. Clin. Med. 2019, 8, 1241.
  46. Browning, A.P.; Haridas, P.; Simpson, M.J. A Bayesian Sequential Learning Framework to Parameterise Continuum Models of Melanoma Invasion into Human Skin. Bull. Math. Biol. 2018, 81, 676–698.
  47. Tanaka, T.; Voigt, M.D. Decision tree analysis to stratify risk of de novo non-melanoma skin cancer following liver transplantation. J. Cancer Res. Clin. Oncol. 2018, 144, 607–615.
  48. Sun, J.; Huang, Y. Computer aided intelligent medical system and nursing of breast surgery infection. Microprocess. Microsyst. 2020, 81, 103769.
  49. Quinn, P.L.; Oliver, J.B.; Mahmoud, O.M.; Chokshi, R.J. Cost-Effectiveness of Sentinel Lymph Node Biopsy for Head and Neck Cutaneous Squamous Cell Carcinoma. J. Surg. Res. 2019, 241, 15–23.
  50. Saba, T.; Khan, M.A.; Rehman, A.; Marie-Sainte, S.L. Region Extraction and Classification of Skin Cancer: A Heterogeneous framework of Deep CNN Features Fusion and Reduction. J. Med. Syst. 2019, 43, 289.
  51. Ghiasi, M.M.; Zendehboudi, S. Application of decision tree-based ensemble learning in the classification of breast cancer. Comput. Biol. Med. 2021, 128, 104089.
  52. Alkhushayni, S.; Al-Zaleq, D.; Andradi, L.; Flynn, P. The Application of Differing Machine Learning Algorithms and Their Related Performance in Detecting Skin Cancers and Melanomas. J. Ski. Cancer 2022, 2022, 2839162.
  53. Ak, M.F. A Comparative Analysis of Breast Cancer Detection and Diagnosis Using Data Visualization and Machine Learning Applications. Healthcare 2020, 8, 111.
  54. Sivaraj, S.; Malmathanraj, R.; Palanisamy, P. Detecting anomalous growth of skin lesion using threshold-based segmentation algorithm and Fuzzy K-Nearest Neighbor classifier. J. Cancer Res. Ther. 2020, 16, 40–52.
  55. Oukil, S.; Kasmi, R.; Mokrani, K.; García-Zapirain, B. Automatic segmentation and melanoma detection based on color and texture features in dermoscopic images. Ski. Res. Technol. 2021, 28, 203–211.
  56. Nawaz, M.; Mehmood, Z.; Nazir, T.; Naqvi, R.A.; Rehman, A.; Iqbal, M.; Saba, T. Skin cancer detection from dermoscopic images using deep learning and fuzzy k-means clustering. Microsc. Res. Tech. 2022, 85, 339–351.
  57. Anas, M.; Gupta, K.; Ahmad, S. Skin cancer classification using K-means clustering. Int. J. Tech. Res. Appl. 2017, 5, 62–65.
  58. Hossain, M.S.; Muhammad, G.; Alhamid, M.F.; Song, B.; Al-Mutib, K. Audio-Visual Emotion Recognition Using Big Data Towards 5G. Mob. Networks Appl. 2016, 21, 753–763.
  59. Khan, M.Q.; Hussain, A.; Rehman, S.U.; Khan, U.; Maqsood, M.; Mehmood, K.; Khan, M.A. Classification of Melanoma and Nevus in Digital Images for Diagnosis of Skin Cancer; IEEE: Washington, DC, USA, 2019; Volume 7, pp. 90132–90144.
  60. Janney, B.; Roslin, E. Analysis of Skin Cancer using K-Means Clustering and Hybrid Classification Model. Indian J. Public Health Res. Dev. 2019, 10, 1371–1378.
  61. Murugan, A.; Nair, S.H.; Kumar, K.P.S. Detection of Skin Cancer Using SVM, Random Forest and kNN Classifiers. J. Med. Syst. 2019, 43, 269.
  62. Luu, N.T.; Le, T.-H.; Phan, Q.-H.; Pham, T.-T. Characterization of Mueller matrix elements for classifying human skin cancer utilizing random forest algorithm. J. Biomed. Opt. 2021, 26, 075001.
  63. Nandhini, S.; Sofiyan, M.A.; Kumar, S.; Afridi, A. Skin cancer classification using random forest. Int. J. Manag. Humanit. 2019, 4, 39–42.
  64. Dhivyaa, C.R.; Sangeetha, K.; Balamurugan, M.; Amaran, S.; Vetriselvi, T.; Johnpaul, P. Skin lesion classification using decision trees and random forest algorithms. J. Ambient. Intell. Humaniz. Comput. 2020, 1–13.
  65. Melbin, K.; Raj, Y. Integration of modified ABCD features and support vector machine for skin lesion types classification. Multimed. Tools Appl. 2021, 80, 8909–8929.
  66. Alsaeed, A.A.D. On the development of a skin cancer computer aided diagnosis system using support vector machine. Biosci. Biotechnol. Res. Commun. 2019, 12, 297–308.
  67. Neela, A.G. Implementation of support vector machine for identification of skin cancer. Int. J. Eng. Manuf. 2019, 9, 42–52.
  68. Arora, G.; Dubey, A.K.; Jaffery, Z.A.; Rocha, A. Bag of feature and support vector machine based early diagnosis of skin cancer. Neural Comput. Appl. 2020, 34, 8385–8392.
  69. Poovizhi, S.; Tr, G.B. An Efficient Skin Cancer Diagnostic System Using Bendlet Transform and Support Vector Machine. An. Acad. Bras. Ciências 2020, 92.
  70. Schaefer, G.; Krawczyk, B.; Celebi, M.E.; Iyatomi, H. An ensemble classification approach for melanoma diagnosis. Memetic Comput. 2014, 6, 233–240.
  71. Rahman, Z.; Hossain, M.S.; Islam, M.R.; Hasan, M.M.; Hridhee, R.A. An approach for multiclass skin lesion classification based on ensemble learning. Inform. Med. Unlocked 2021, 25, 100659.
  72. Divya, D.; Ganeshbabu, T.R. Fitness adaptive deer hunting-based region growing and recurrent neural network for melanoma skin cancer detection. Int. J. Imaging Syst. Technol. 2020, 30, 731–752.
  73. Ahmad, B.; Usama, M.; Ahmad, T.; Khatoon, S.; Alam, C.M. An ensemble model of convolution and recurrent neural network for skin disease classification. Int. J. Imaging Syst. Technol. 2021, 32, 218–229.
  74. Patil, R.S.; Biradar, N. Automated mammogram breast cancer detection using the optimized combination of convolutional and recurrent neural network. Evol. Intell. 2021, 14, 1459–1474.
  75. Alom, M.Z.; Aspiras, T.; Taha, T.M.; Asari, V.K. Skin cancer segmentation and classification with NABLA-N and inception recurrent residual convolutional networks. arXiv 2019, arXiv:1904.11126.
  76. Toğaçar, M.; Cömert, Z.; Ergen, B. Intelligent skin cancer detection applying autoencoder, MobileNetV2 and spiking neural networks. Chaos Solitons Fractals 2021, 144, 110714.
  77. Diame, Z.E.; ElBery, M.; Salem MA, M.; Roushdy, M.I. Experimental Comparative Study on Autoencoder Performance for Aided Melanoma Skin Disease Recognition. Int. J. Intell. Comput. Inf. Sci. 2022, 22, 88–97.
  78. Majji, R.; Prakash, P.G.O.; Cristin, R.; Parthasarathy, G. Social bat optimisation dependent deep stacked auto-encoder for skin cancer detection. IET Image Process. 2020, 14, 4122–4131.
  79. Diame, Z.E.; Al-Berry, M.N.; Salem, M.A.-M.; Roushdy, M. Autoencoder Performance Analysis of Skin Lesion Detection. J. Southwest Jiaotong Univ. 2021, 56, 937–947.
  80. Srinivasu, P.N.; SivaSai, J.G.; Ijaz, M.F.; Bhoi, A.K.; Kim, W.; Kang, J.J. Classification of skin disease using deep learning neural networks with MobileNet V2 and LSTM. Sensors 2021, 21, 2852.
  81. Wu, X.; Wang, H.-Y.; Shi, P.; Sun, R.; Wang, X.; Luo, Z.; Zeng, F.; Lebowitz, M.S.; Lin, W.-Y.; Lu, J.-J.; et al. Long short-term memory model—A deep learning approach for medical data with irregularity in cancer predication with tumor markers. Comput. Biol. Med. 2022, 144, 105362.
  82. Elashiri, M.A.; Rajesh, A.; Pandey, S.N.; Shukla, S.K.; Urooj, S.; Lay-Ekuakille, A. Ensemble of weighted deep concatenated features for the skin disease classification model using modified long short term memory. Biomed. Signal Process. Control. 2022, 76, 103729.
  83. Liao, J.; Liu, L.; Duan, H.; Huang, Y.; Zhou, L.; Chen, L.; Wang, C. Using a Convolutional Neural Network and Convolutional Long Short-term Memory to Automatically Detect Aneurysms on 2D Digital Subtraction Angiography Images: Framework Development and Validation. JMIR Public Health Surveill. 2022, 10, e28880.
  84. Mazoure, B.; Mazoure, A.; Bédard, J.; Makarenkov, V. DUNEScan: A web server for uncertainty estimation in skin cancer detection with deep neural networks. Sci. Rep. 2022, 12, 1–10.
  85. Esteva, A.; Kuprel, B.; Novoa, R.A.; Ko, J.; Swetter, S.M.; Blau, H.M.; Thrun, S. Dermatologist-level classification of skin cancer with deep neural networks. Nature 2017, 542, 115–118.
  86. Khan, M.A.; Sharif, M.; Akram, T.; Kadry, S.; Hsu, C.-H. A two-stream deep neural network-based intelligent system for complex skin cancer types classification. Int. J. Intell. Syst. 2022, 37, 10621–10649.
  87. Han, S.S.; Park, I.; Chang, S.E.; Lim, W.; Kim, M.S.; Park, G.H.; Chae, J.B.; Huh, C.H.; Na, J.-I. Augmented Intelligence Dermatology: Deep Neural Networks Empower Medical Professionals in Diagnosing Skin Cancer and Predicting Treatment Options for 134 Skin Disorders. J. Investig. Dermatol. 2020, 140, 1753–1761.
  88. Wan, J.-J.; Chen, B.-L.; Kong, Y.-X.; Ma, X.-G.; Yu, Y.-T. An Early Intestinal Cancer Prediction Algorithm Based on Deep Belief Network. Sci. Rep. 2019, 9, 1–13.
  89. Al-Antari, M.A.; Al-Masni, M.; Park, S.-U.; Park, J.; Metwally, M.K.; Kadah, Y.M.; Han, S.-M.; Kim, T.-S. An Automatic Computer-Aided Diagnosis System for Breast Cancer in Digital Mammograms via Deep Belief Network. J. Med. Biol. Eng. 2017, 38, 443–456.
  90. Farhi, L.; Kazmi, S.M.; Imam, H.; Alqahtani, M.; Rehman, F.U. Dermoscopic Image Classification Using Deep Belief Learning Network Architecture. Wirel. Commun. Mob. Comput. 2022, 2022, 2415726.
  91. Dorj, U.-O.; Lee, K.-K.; Choi, J.-Y.; Lee, M. The skin cancer classification using deep convolutional neural network. Multimedia Tools Appl. 2018, 77, 9909–9924.
  92. Refianti, R.; Mutiara, A.B.; Priyandini, R.P. Classification of melanoma skin cancer using convolutional neural network. Int. J. Adv. Comput. Sci. Appl. 2019, 10, 409–417.
  93. Höhn, J.; Hekler, A.; Krieghoff-Henning, E.; Kather, J.N.; Utikal, J.S.; Meier, F.; Brinker, T.J. Integrating patient data into skin cancer classification using convolutional neural networks: Systematic review. J. Med. Internet Res. 2021, 23, e20708.
  94. Han, S.S.; Moon, I.J.; Lim, W.; Suh, I.S.; Lee, S.Y.; Na, J.-I.; Kim, S.H.; Chang, S.E. Keratinocytic Skin Cancer Detection on the Face Using Region-Based Convolutional Neural Network. JAMA Dermatol. 2020, 156, 29–37.
  95. Chaturvedi, S.S.; Tembhurne, J.V.; Diwan, T. A multi-class skin Cancer classification using deep convolutional neural networks. Multimedia Tools Appl. 2020, 79, 28477–28498.
  96. Li, Y.; Fauteux, F.; Zou, J.; Nantel, A.; Pan, Y. Personalized prediction of genes with tumor-causing somatic mutations based on multi-modal deep Boltzmann machine. Neurocomputing 2019, 324, 51–62.
  97. Peter Soosai Anandaraj, A.; Gomathy, V.; Amali Angel Punitha, A.; Abitha Kumari, D.; Sheeba Rani, S.; Sureshkumar, S. Internet of Medical Things (IoMT) Enabled Skin Lesion Detection and Classification Using Optimal Segmentation and Restricted Boltzmann Machines. In Cognitive Internet of Medical Things for Smart Healthcare; Springer: Cham, Switzerland, 2021; pp. 195–209.
  98. Usmani, U.A.; Watada, J.; Jaafar, J.; Aziz, I.A.; Roy, A. A Reinforcement Learning Algorithm for Automated Detection of Skin Lesions. Appl. Sci. 2021, 11, 9367.
  99. Wang, S.; Hamian, M. Skin Cancer Detection Based on Extreme Learning Machine and a Developed Version of Thermal Exchange Optimization. Comput. Intell. Neurosci. 2021, 2021, 9528664.
  100. Sayed, G.I.; Soliman, M.M.; Hassanien, A.E. A novel melanoma prediction model for imbalanced data using optimized SqueezeNet by bald eagle search optimization. Comput. Biol. Med. 2021, 136, 104712.
  101. Afza, F.; Sharif, M.; Khan, M.A.; Tariq, U.; Yong, H.-S.; Cha, J. Multiclass Skin Lesion Classification Using Hybrid Deep Features Selection and Extreme Learning Machine. Sensors 2022, 22, 799.
  102. Khan, I.U.; Aslam, N.; Anwar, T.; Aljameel, S.S.; Ullah, M.; Khan, R.; Rehman, A.; Akhtar, N. Remote Diagnosis and Triaging Model for Skin Cancer Using EfficientNet and Extreme Gradient Boosting. Complexity 2021, 2021, 5591614.
  103. Alabdulkareem, A. Artificial intelligence and dermatologists: Friends or foes? J. Dermatol. Dermatol. Surg. 2019, 23, 57.
  104. Shen, C.; Li, C.; Xu, F.; Wang, Z.; Shen, X.; Gao, J.; Ko, R.; Jing, Y.; Tang, X.; Yu, R.; et al. Web-based study on Chinese dermatologists’ attitudes towards artificial intelligence. Ann. Transl. Med. 2020, 8, 698.
  105. Maron, R.C.; Utikal, J.S.; Hekler, A.; Hauschild, A.; Sattler, E.; Sondermann, W.; Haferkamp, S.; Schilling, B.; Heppt, M.V.; Jansen, P.; et al. Artificial Intelligence and Its Effect on Dermatologists’ Accuracy in Dermoscopic Melanoma Image Classification: Web-Based Survey Study. J. Med. Internet Res. 2020, 22, e18091.
  106. Nelson, C.A.; Pérez-Chada, L.M.; Creadore, A.; Li, S.J.; Lo, K.; Manjaly, P.; Mostaghimi, A. Patient perspectives on the use of artificial intelligence for skin cancer screening: A qualitative study. JAMA Dermatol. 2020, 156, 501–512.
  107. Sreelatha, T.; Subramanyam, M.V.; Prasad, M.N.G. A Survey work on Early Detection methods of Melanoma Skin Cancer. Res. J. Pharm. Technol. 2019, 12, 2589.
  108. di Ruffano, L.F.; Dinnes, J.; Deeks, J.J.; Chuchu, N.; Bayliss, S.E.; Davenport, C.; Takwoingi, Y.; Godfrey, K.; O’Sullivan, C.; Matin, R.N.; et al. Optical coherence tomography for diagnosing skin cancer in adults. Cochrane Database Syst. Rev. 2018, 12, CD013189.
  109. Tan, T.Y.; Zhang, L.; Lim, C.P. Intelligent skin cancer diagnosis using improved particle swarm optimization and deep learning models. Appl. Soft Comput. 2019, 84, 105725.
  110. Gerke, S.; Minssen, T.; Cohen, G. Ethical and legal challenges of artificial intelligence-driven healthcare. In Artificial Intelligence in Healthcare; Academic Press: Cambridge, MA, USA, 2020; pp. 295–336.
  111. Rigby, M.J. Ethical Dimensions of Using Artificial Intelligence in Health Care. AMA J. Ethics 2019, 21, E121–E124.
  112. Da Silva, M.; Horsley, T.; Singh, D.; Da Silva, E.; Ly, V.; Thomas, B.; Daniel, R.C.; Chagal-Feferkorn, K.A.; Iantomasi, S.; White, K.; et al. Legal concerns in health-related artificial intelligence: A scoping review protocol. Syst. Rev. 2022, 11, 1–8.
  113. Hu, T.; Chitnis, N.; Monos, D.; Dinh, A. Next-generation sequencing technologies: An overview. Hum. Immunol. 2021, 82, 801–811.
  114. Slatko, B.E.; Gardner, A.F.; Ausubel, F.M. Overview of Next-Generation Sequencing Technologies. Curr. Protoc. Mol. Biol. 2018, 122, e59.
  115. Lobl, M.B.; Clarey, D.; Higgins, S.; Sutton, A.; Hansen, L.; Wysong, A. Targeted next-generation sequencing of matched localized and metastatic primary high-risk SCCs identifies driver and co-occurring mutations and novel therapeutic targets. J. Dermatol. Sci. 2020, 99, 30–43.
  116. Brancaccio, R.N.; Robitaille, A.; Dutta, S.; Cuenin, C.; Santare, D.; Skenders, G.; Leja, M.; Fischer, N.; Giuliano, A.R.; Rollison, D.E.; et al. Generation of a novel next-generation sequencing-based method for the isolation of new human papillomavirus types. Virology 2018, 520, 1–10.
  117. Kadampur, M.A.; Al Riyaee, S. Skin cancer detection: Applying a deep learning based model driven architecture in the cloud for classifying dermal cell images. Inform. Med. Unlocked 2019, 18, 100282.
  118. Khan, M.A.; Akram, T.; Sharif, M.; Kadry, S.; Nam, Y. Computer Decision Support System for Skin Cancer Localization and Classification. Comput. Mater. Contin. 2021, 68, 1041–1064.
  119. Sharif, M.I.; Khan, M.A.; Alhussein, M.; Aurangzeb, K.; Raza, M. A decision support system for multimodal brain tumor classification using deep learning. Complex Intell. Syst. 2021, 8, 3007–3020.
  120. Abdar, M.; Acharya, U.R.; Sarrafzadegan, N.; Makarenkov, V. NE-nu-SVC: A New Nested Ensemble Clinical Decision Support System for Effective Diagnosis of Coronary Artery Disease. IEEE Access 2019, 7, 167605–167620.
  121. Ray, P.P.; Dash, D.; De, D. A Systematic Review of Wearable Systems for Cancer Detection: Current State and Challenges. J. Med. Syst. 2017, 41, 180.
  122. Gupta, A.K.; Bharadwaj, M.; Mehrotra, R. Skin Cancer Concerns in People of Color: Risk Factors and Prevention. Asian Pac. J. Cancer Prev. 2016, 17, 5257–5264.
  123. Sun, M.D.; Kentley, J.; Wilson, B.W.; Soyer, H.P.; Curiel-Lewandrowski, C.N.; Rotemberg, V.; ISIC Technique Working Group. Digital skin imaging applications, part I: Assessment of image acquisition technique features. Ski. Res. Technol. 2022, 28, 623–632.
  124. Barata, C.; Marques, J.S.; Celebi, M.E. Improving dermoscopy image analysis using color constancy. In 2014 IEEE International Conference on Image Processing (ICIP); IEEE: Washington, DC, USA, 2014; pp. 3527–3531.
  125. Salvi, M.; Branciforti, F.; Veronese, F.; Zavattaro, E.; Tarantino, V.; Savoia, P.; Meiburger, K.M. DermoCC-GAN: A new approach for standardizing dermatological images using generative adversarial networks. Comput. Methods Programs Biomed. 2022, 225, 107040.
  126. Watson, M.; Holman, D.M.; Maguire-Eisen, M. Ultraviolet Radiation Exposure and Its Impact on Skin Cancer Risk. Semin. Oncol. Nurs. 2016, 32, 241–254.
  127. Wolner, Z.J.; Yélamos, O.; Liopyris, K.; Rogers, T.; Marchetti, M.A.; Marghoob, A.A. Enhancing Skin Cancer Diagnosis with Dermoscopy. Dermatol. Clin. 2017, 35, 417–437.
  128. A Comparison of Polarised and Nonpolarised Dermoscopy|DermNet. Available online: https://dermnetnz.org/topics/a-comparison-of-polarised-and-nonpolarised-dermoscopy (accessed on 30 January 2023).
  129. Hone, N.L.; Grandhi, R.; Ingraffea, A.A. Basal Cell Carcinoma on the Sole: An Easily Missed Cancer. Case Rep. Dermatol. 2016, 8, 283–286.
  130. Pala, P.; Bergler-Czop, B.S.; Gwiżdż, J. Teledermatology: Idea, benefits and risks of modern age–a systematic review based on melanoma. Adv. Dermatol. Allergol. Postępy Dermatol. I Alergol. 2020, 37, 159–167.
  131. Veronese, F.; Branciforti, F.; Zavattaro, E.; Tarantino, V.; Romano, V.; Meiburger, K.; Salvi, M.; Seoni, S.; Savoia, P. The Role in Teledermoscopy of an Inexpensive and Easy-to-Use Smartphone Device for the Classification of Three Types of Skin Lesions Using Convolutional Neural Networks. Diagnostics 2021, 11, 451.
  132. Skin Cancer ISIC. Available online: https://www.kaggle.com/datasets/nodoubttome/skin-cancer9-classesisic (accessed on 30 November 2022).
  133. Dermatology Data Set. Available online: https://archive.ics.uci.edu/ml/datasets/Dermatology?ref=datanews.io (accessed on 30 November 2022).
  134. The HAM10000 Dataset, a Large Collection of Multi-Source Dermatoscopic Images of Common Pigmented Skin Lesions. Available online: https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/DBW86T (accessed on 30 November 2022).