Deep Learning Algorithms and Their Applications: Comparison

Deep learning is a type of machine learning that uses artificial neural networks to mimic the human brain. It uses machine learning methods such as supervised, semi-supervised, or unsupervised learning strategies to learn automatically in deep architectures and has gained much popularity due to its superior ability to learn from huge amounts of data.

  • artificial neural networks (ANN)
  • deep learning
  • autoencoders (AE)

1. Introduction

Deep learning is a machine learning technique that teaches computers and devices logical functioning. It is inspired by the structure of the human brain. Deep learning originated as artificial neural networks (ANNs) and, after decades of research and development, has become far more efficient than other machine learning algorithms [1].
In the early stages of the development of neural networks, researchers aspired to create a system that mimicked the functions of the human brain. In 1943, McCulloch and Pitts attempted to explain how the brain could create highly complex patterns by using neurons [2]. They invented the MCP (McCulloch Pitts neuron) model, which had a significant impact on the development of artificial neural networks. They implemented threshold logic in their model to simulate the human thought process but not to learn. Since then, the development of deep learning has been slow but steady, with several significant milestones along the way. In 1958, Frank Rosenblatt invented the perceptron, the first prototype of the modern neural networks, which consisted of two layers of processing units that are commonly thought of as a basic mechanism for recognizing simple patterns [3]. Rather than undergoing further development and research, the evolutionary history of neural networks and artificial intelligence underwent a winter after 1969, after the MIT professors Minsky and Papert proved the limitations and failings of the perceptron [4].
The standstill was broken when the backpropagation algorithm emerged in 1974, invented by Werbos [5]. It is considered a significant milestone in neural network development. In 1980, Fukushima [6] introduced the “Neocognitron,” which is regarded as the ancestor of convolutional neural networks, followed by the Boltzmann machine, invented by Hinton et al. in 1985 [1], and the recurrent neural network, invented in 1986 by Jordan [7]. In 1998, Yann LeCun introduced the convolutional neural network with backpropagation for document analysis [8]. Deep learning has made major progress since 2006, when Hinton introduced the concept of deep belief networks (DBNs) [9][10]. These involve a two-stage plan: pre-training and fine-tuning. This technique allowed researchers to train neural networks much more deeply than previous methods.
Deep learning algorithms aim to draw conclusions similar to those humans would reach by continuously analyzing data according to a given logical structure. Deep learning allows machines to manipulate images, text, or audio files to accomplish human-like tasks. To achieve this, deep learning uses a multi-layered structure of algorithms called a neural network. As the name suggests, deep learning involves going deep into multiple layers of a network, including hidden layers; as one goes deeper, more complex information is extracted. Deep learning relies on iterative learning methods that expose machines to very large datasets, helping computers learn to identify traits and adjust to changes. After repeated exposure, machines are able to learn the differences among datasets, understand the logic, and reach reliable conclusions [1].
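As a minimal, hedged illustration of this layered structure (not code from the cited works), the following Keras sketch stacks several hidden layers and trains them iteratively on placeholder data; the layer sizes, input shape, and data are assumptions.

```python
# A minimal sketch of a deep, multi-layer network in Keras, illustrating stacked hidden
# layers that extract increasingly abstract features. Shapes and sizes are illustrative.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(64,)),               # 64 input features (assumed)
    layers.Dense(128, activation="relu"),    # first hidden layer
    layers.Dense(64, activation="relu"),     # deeper hidden layer: more abstract features
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # binary decision
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Iterative (epoch-based) learning on a large dataset, as described above (placeholder data).
X = np.random.rand(1000, 64)
y = np.random.randint(0, 2, size=(1000,))
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```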
A review of deep learning is presented in [11], which provides the following picture: computational models composed of multiple processing layers can learn representations of data with multiple levels of abstraction. These techniques have improved the state of the art in several fields, including visual object recognition, speech recognition, genomics, and drug discovery, among many others. Deep learning applies backpropagation to find complex structure within a huge dataset, indicating how the machine should alter the internal parameters used to compute the representation in each layer from the representation in the previous layer. RNNs are mainly used for sequential data, such as speech and text, whereas CNNs are used for the processing of images, speech, audio, and video.
In recent years, a large number of studies, such as [9][10][11][12], have made use of deep learning and demonstrated its capabilities in many respects, including healthcare and health informatics (bioinformatics, medical imaging, pervasive sensing, medical informatics, and public health) [13][14][15][16][17][18][19][20][21][22][23][24][25]; motor imagery classification [26]; the performance improvement of sensorless FOC [27]; a new method developed for a retrofitted self-tuned controller with an FPGA [28]; and a self-tuning neural network PID controller [29].

2. Related Works

Deep learning has been widely implemented in various areas of health informatics, such as bioinformatics, medical imaging, pervasive sensing, medical informatics, and public health [9]. An analysis of deep learning algorithms reported in the literature related to the health sector, especially COVID-19 prediction, classification, and detection strategies, is presented below. Deep learning can be employed reliably on physiological signals obtained via electromyogram (EMG), electroencephalogram (EEG), electrooculogram (EOG), or electrocardiogram (ECG); for these purposes, the most common algorithms used in processing such physiological signals are RNNs and one-dimensional CNNs [9].
The author of [12] proposed a sonar target recognition system in which two-layer backpropagation networks were trained to distinguish reflected sonar signals from rocks and from metal cylinders at the bottom of Chesapeake Bay. The network had 60 input units and 2 output units, and the input pattern was based on the Fourier transform of the raw time signal. The best performance was obtained with 12 hidden units (nearly 100% training-set accuracy).
The authors of [30] proposed a solution for medical practitioners handling a COVID-19 crisis that implements protocols of investigation for the first patient infected by this disease, which will also be beneficial to epidemiologists for making a decision about virus spread. This research presents a framework based on deep learning (specifically, stacked auto-encoders in medical image retrieval) in its feature-extraction phase. During retrieval tasks, this method reduces the image’s size to a small, compact representation, allowing for faster image comparisons. In a content-based image retrieval system, the importance of the auto-encoder approach can be seen, since it achieves good recognition accuracy at 80% for COVID-19 digital images, given the capability of the stacked encoders. Autoencoder algorithms have also been developed for cancer diagnosis. Sparse autoencoders are presented for microaneurysm detection in fundus images as a part of a diabetic retinopathy strategy. Additionally, various autoencoder-based learning methods have been developed for use in Alzheimer’s disease prediction based on functional magnetic resonance images (FMRI), 3D brain reconstruction, cell clustering, and human behavior monitoring [31]. In [13], researchers examined various autoencoding architectures that integrated multiple types of data (e.g., multi-omic and clinical) from cancer patients. They explored these approaches and provided a clear framework for developing systems which can be used for investigating cancer characteristics and translating some of the results into clinical applications. Using these networks, they described how to design, build, and apply integrative analyses of heterogeneous data on breast cancer. These approaches yield accurate and stable diagnoses by producing relevant data representations.
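The following is a hedged sketch (not the authors' code) of a stacked autoencoder used as a feature extractor for content-based image retrieval, as described above; the layer sizes, image size, and placeholder data are assumptions.

```python
# Sketch of a stacked autoencoder whose compact code can be compared instead of raw pixels.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

input_dim = 64 * 64                                  # flattened grey-scale image (assumed size)
inputs = keras.Input(shape=(input_dim,))
# Encoder: progressively smaller layers produce a compact code for fast comparison.
h = layers.Dense(512, activation="relu")(inputs)
h = layers.Dense(128, activation="relu")(h)
code = layers.Dense(32, activation="relu", name="code")(h)
# Decoder mirrors the encoder to reconstruct the input.
h = layers.Dense(128, activation="relu")(code)
h = layers.Dense(512, activation="relu")(h)
outputs = layers.Dense(input_dim, activation="sigmoid")(h)

autoencoder = keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")

X = np.random.rand(256, input_dim)                   # placeholder images
autoencoder.fit(X, X, epochs=5, batch_size=32, verbose=0)

# The trained encoder maps each image to a small code; retrieval compares codes, not pixels.
encoder = keras.Model(inputs, code)
codes = encoder.predict(X, verbose=0)
```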
RBM architectures can be applied to manifold learning of brain MRIs for Alzheimer's disease detection, segmenting multiple sclerosis lesions in multi-channel 3D MRIs, automatically assigning diagnoses according to the clinical status of patients, suicide risk prediction for mental health patients using low-dimensional representations of the medical information embedded in electronic health records (EHRs), and analyzing photoplethysmography signals for health monitoring [14]. A deep learning framework for the classification of complex hyperspectral medical images was presented in [15]. To accomplish this, a deep Boltzmann machine (DBM) architecture with a bipartite structure was developed as an unsupervised generative model; the implementation is based on a three-layer unsupervised network with a backpropagation structure. In the dataset used, image patches are collected and classified into two classes, namely, non-informative and discriminative. The spatial information is used for classification, and spectral-spatial representations of class labels are created. The accuracy, false-positive predictions, and sensitivity are calculated for the fully connected network based on the labelled classes. With the proposed cognitive computation technique, an accuracy of 95.5% and a sensitivity of 93.5% were achieved. The DBM model provides a significant improvement in the computer-aided diagnosis of cancerous regions in HSI (hyperspectral imaging) images. In the first phase, the method separates the two types of image patches, discriminative and non-informative; in the second phase of learning, the performance was confirmed by calculating the accuracy and success rate of the DBM. This method was 6.1% more accurate than a CNN, and the results showed that the unsupervised DBM was best suited for identifying cancer regions in complicated 3D hyperspectral images [15].
DBNs are used in the modeling of compound–protein interactions, anomaly detection, biological parameter monitoring, obstacle detection, sign language recognition, detecting lifestyle diseases, and modeling infectious disease epidemics [31]. In [16], deep belief networks were used to classify normal and COVID-19 patients using medical images of chest X-rays, and the proposed system successfully identified COVID-19 cases with a 90% accuracy rate. The diagnosis is usually based on the presence of a white patchy shadow in the lungs. A Gaussian filter is first applied to remove noise so that the disease can be detected from these medical images, and the important section of the lungs is then separated from the rest of the image. Threshold-based segmentation is performed to separate gray pixels from the others; the number of gray pixels in lungs affected by COVID-19 is very low in comparison to normal lungs.
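A hedged sketch of this pre-processing idea (Gaussian denoising followed by threshold-based selection of gray pixels) is shown below; the threshold values and placeholder image are assumptions, not the settings used in [16].

```python
# Illustrative pre-processing: Gaussian filtering to remove noise, then threshold-based
# segmentation to isolate mid-intensity ("gray") pixels in a normalised chest X-ray.
import numpy as np
from scipy.ndimage import gaussian_filter

def segment_gray_pixels(xray: np.ndarray, low: float = 0.35, high: float = 0.65) -> np.ndarray:
    """Return a binary mask of gray pixels after denoising an image scaled to [0, 1]."""
    denoised = gaussian_filter(xray, sigma=2.0)       # noise removal
    mask = (denoised >= low) & (denoised <= high)     # keep mid-intensity (gray) pixels
    return mask

image = np.random.rand(512, 512)                      # placeholder X-ray in [0, 1]
mask = segment_gray_pixels(image)
gray_pixel_count = int(mask.sum())                    # low counts were associated with COVID-19 lungs in [16]
print(gray_pixel_count)
```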
CNN algorithms can be employed in the prediction of risk of osteoarthritis by using the automatic segmentation of knee cartilage MRIs, when using retinal fundus photographs to detect diabetic retinopathy, in dermatologist-level classification of skin cancer, in congestive heart failure prediction, for predicting chronic obstructive pulmonary disease using longitudinal EHRs, for predicting the quality of sleep using physical activity data during awake time, in electroencephalogram analysis, and for estimating the prevalences of various chromatin marks [14]. The early stages of Alzheimer’s disease (AD) are associated with mild cognitive impairment (MCI).
For the early diagnosis of Alzheimer's disease, some authors proposed a new voxel-based hierarchical feature extraction method (VHFE). The entire brain was divided into 90 regions of interest (ROIs) using an automatic anatomical labeling (AAL) template. They discarded the uninformative data by selecting the informative voxels in each ROI, using a baseline of the voxels' values to form a vector. The first-stage features were selected on the basis of the correlations between voxels of different groups. In order to learn deeply hidden features, the brain feature maps made up of the voxels of each subject were fed into a convolutional neural network (CNN). The results show that the proposed method is robust, with promising performance in comparison to the current state-of-the-art methods [18].
In [32], the processing of data that exhibit natural spatial invariance (e.g., images, whose meaning does not change under translation) is shown to have become central to the field of computer vision (CV). The use of deep learning systems could help physicians by providing second opinions and flagging areas of concern. In object classification, CNNs learn to classify the objects contained in images and have achieved human-level performance. They are initially trained on a massive dataset unrelated to the task of interest, to learn the natural statistics of images (curves, straight lines, colorations, etc.), and are then fine-tuned on a much smaller dataset related to the task of interest (e.g., medical images), with the higher-level layers of the network retrained to distinguish between diagnostic cases. CNNs have demonstrated strong performance in such transfer learning [33]. Clinicians have started to utilize image segmentation and object detection for urgent and easily missed cases, such as identifying large-artery occlusions in the brain using radiological images [19], during which severe brain damage may occur in a short period of time (a few minutes). Moreover, cancer histopathology reading can be supplemented with CNNs trained to detect mitotic cells [20] or tumor regions [21]. CNNs have also been used to estimate survival probability [22] by discovering the biological features of tissues.
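A hedged sketch of this transfer-learning recipe is given below: a network pre-trained on a large generic dataset (ImageNet weights via MobileNetV2, an assumed choice) has a new classification head fine-tuned on a small, task-specific set of placeholder images; the class count and input size are also assumptions.

```python
# Transfer learning sketch: frozen pre-trained backbone, new head trained on a small dataset.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

base = keras.applications.MobileNetV2(weights="imagenet", include_top=False,
                                       input_shape=(224, 224, 3))
base.trainable = False                                 # keep the generic visual features frozen

inputs = keras.Input(shape=(224, 224, 3))
x = base(inputs, training=False)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(2, activation="softmax")(x)     # e.g. two diagnostic cases (assumed)
model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Fine-tune the new head on a much smaller labelled set of medical images (placeholder data).
X = np.random.rand(32, 224, 224, 3)
y = np.random.randint(0, 2, size=(32,))
model.fit(X, y, epochs=3, verbose=0)
```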
Long short-term memory (LSTM) [34] RNNs are being tested in medicine for multiple purposes, ranging from clinical measurements of patients in pediatric intensive care units, a dynamic memory model for predictive medicine based on patient history, and prediction of disease onset from longitudinal laboratory tests, to the de-identification of patients’ clinical notes [14]. RNN-based language translation [35] has been used to directly translate speech in one language to text in another. If adapted for electronic health records (EHRs), this technique could translate a patient–provider conversation into transcribed text. Classifying the attributes and status of each medical entity from the conversation while accurately summarizing the dialogue is key. The next generation of automatic speech recognition and information extraction models will likely help clinical voice assistants to accurately record patient visits [35]. Despite showing promise in early human–computer interaction experiments, these techniques have yet to be widely used in medical practice.
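As a minimal, hedged sketch of the kind of LSTM use mentioned above, the following Keras model predicts a binary outcome from a short sequence of longitudinal clinical measurements; the sequence length, feature count, and data are placeholder assumptions.

```python
# LSTM over a patient's time series of measurements, predicting a binary outcome.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(48, 12)),            # 48 time steps of 12 lab/vital features (assumed)
    layers.LSTM(64),                         # long short-term memory over the patient history
    layers.Dense(1, activation="sigmoid")    # probability of disease onset (assumed task)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=[keras.metrics.AUC()])

X = np.random.rand(200, 48, 12)              # placeholder patient sequences
y = np.random.randint(0, 2, size=(200,))
model.fit(X, y, epochs=3, batch_size=16, verbose=0)
```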
The brain–computer interface (BCI) allows people and machines to interact by analyzing brain activity. Electroencephalography (EEG) is the most viable and noninvasive method of obtaining such information, although it has a low signal-to-noise ratio and low spatial resolution. A new method has been proposed that combines blind source separation (BSS) to estimate independent components, the continuous wavelet transform (CWT) to represent these signals in 2D, and a convolutional neural network (CNN) for classification. The experimental accuracy of 94.66% is competitive with recent state-of-the-art techniques. Regarding the architecture of the CNN, the researchers found that hyper-parameters such as the kernel size and the kernel stride of each convolutional layer significantly influence network performance, whereas the number of convolutions has little impact on the final accuracy [26].
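Below is a hedged sketch of the CWT-plus-CNN stage of such a pipeline: one EEG component is turned into a 2D scalogram with PyWavelets and passed to a small CNN; the wavelet, scales, class count, and network shape are assumptions rather than the exact design of [26].

```python
# Continuous wavelet transform -> 2D scalogram -> small CNN classifier (illustrative only).
import numpy as np
import pywt
from tensorflow import keras
from tensorflow.keras import layers

signal = np.random.randn(256)                          # one EEG component (placeholder)
scales = np.arange(1, 65)
coeffs, _ = pywt.cwt(signal, scales, "morl")           # scalogram: (64 scales, 256 samples)
scalogram = np.abs(coeffs)[np.newaxis, ..., np.newaxis]   # add batch and channel dimensions

cnn = keras.Sequential([
    layers.Input(shape=(64, 256, 1)),
    layers.Conv2D(16, kernel_size=5, strides=2, activation="relu"),  # kernel size/stride matter per [26]
    layers.Conv2D(32, kernel_size=3, strides=2, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(4, activation="softmax")               # e.g. four motor-imagery classes (assumed)
])
cnn.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
pred = cnn.predict(scalogram, verbose=0)
```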
In [27], the authors describe a method for developing an open-architecture controller based on reconfigurable hardware in an open-source framework for servo applications. The servo system is a feedback-control system characterized by position, speed, and acceleration outputs. The authors implemented a genetic algorithm for online self-tuning, with an emphasis on both high-quality servo control and vibration reduction during the positioning of a linear motion system. The controller was developed using free tools, from the graphical user interface to the logic implementation. Using this approach, it is also possible to make modifications and add updates easily, which lessens the probability of obsolescence. A graphical user interface developed in Python provides speed profiling, controller auto-tuning, measurement of the main parameters, and monitoring of the vibrations of the servo system. Additionally, a method for auto-tuning the PID (proportional–integral–derivative) controller used was developed to efficiently track trajectories in the linear movement system. The modules are usable with devices from any FPGA (field-programmable gate array) manufacturer, and the open-source tools offer high performance due to their distribution and management of logic resources. Moreover, the solution is multiplatform and can run on Linux, Windows, or macOS. The authors also developed a Python GUI for monitoring system variables and the vibrations generated by the mechanical system; as well as calculating trapezoidal velocity profiles, this GUI configures gains through serial communication. To test different velocity profiles, the profile-calculation algorithm can easily be replaced. In the translational mechatronic system, the PID controller can follow any path with an error of less than 0.2%.
By using the vibration-monitoring system, faults in a translational mechatronic system can be detected, and it can be used as a type of preventative maintenance [28]. That research focuses on real-time, online electrical parameter estimation using CMAC-ADALINE (cerebellar model articulation controller adaptive linear neuron) added to the standard FOC (field-oriented control) scheme in order to improve IM (induction motor) driver performance and lengthen the lifetime of the driver and induction motor. Using two types of neural networks, the authors estimated both the rotor speed and the rotor resistance of an induction motor, proposing a new kind of estimator that combines neural networks and control theory. Moreover, the IM speed and rotor resistance estimations were validated, and the IM driver's performance was improved. The hybrid neural network estimator, which takes advantage of two different neural network designs, was shown to be effective, as expected: by adjusting its weights, it updates the speed and rotor resistance estimates, which improves the performance of the FOC algorithm. The algorithm was easily implemented in a real-time application on a three-phase IM, showing good speed and resistance tracking and a minimal error rate.
Based on a backpropagation artificial neural network, the aforementioned work presented a self-adjusting PID controller [28]. According to the desired output, the network calculates the appropriate gains for the transient and stationary parts of the system's step response. Besides the error used for network training, the maximum desired overshoot, settling time, and stationary error were also used as input data for the network. In order to obtain the dynamic response data associated with PID gains, an offline training database was created using genetic algorithms, which allow data to be obtained over a wide range of operating conditions using only stable gain combinations. The database was used for training, and, adapting to the error and the desired response, the neural network then estimated an appropriate gain combination. The method's performance was assessed by controlling the speed of a direct-current motor. The results show an average error of 4% between the database request and the response of the system; moreover, in 86% of the combinations of the test dataset (1544 connections), the gains predicted by the network were not unstable [29].
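The sketch below illustrates, under stated assumptions, the general idea of such a self-tuning scheme: a small backpropagation network maps desired response characteristics to PID gains, which are then used in a standard discrete PID update. The data are random placeholders standing in for the genetic-algorithm-generated database of [29], and the network size is an assumption.

```python
# Self-tuning sketch: a small network estimates [Kp, Ki, Kd] from desired response specs,
# and the estimated gains drive a standard discrete PID step.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

gain_net = keras.Sequential([
    layers.Input(shape=(3,)),                 # [max overshoot, settling time, steady-state error]
    layers.Dense(16, activation="relu"),
    layers.Dense(3, activation="softplus")    # [Kp, Ki, Kd] kept positive
])
gain_net.compile(optimizer="adam", loss="mse")

specs = np.random.rand(500, 3)                # placeholder specifications
gains = np.random.rand(500, 3)                # placeholder stable gain combinations
gain_net.fit(specs, gains, epochs=5, verbose=0)

def pid_step(error, state, kp, ki, kd, dt=0.01):
    """One discrete PID update; state carries the integral and previous error."""
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    u = kp * error + ki * integral + kd * derivative
    return u, (integral, error)

kp, ki, kd = gain_net.predict(np.array([[0.05, 0.5, 0.01]]), verbose=0)[0]
u, state = pid_step(error=1.0, state=(0.0, 0.0), kp=kp, ki=ki, kd=kd)
```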
In recent years, many studies, such as [23], have made use of deep learning and demonstrated its ability to prevent the sudden failure of lithium-ion batteries. Lithium-ion batteries (LIBs) have high efficiency and low cost, but their instability and varying lifetimes remain challenges. Researchers have worked to develop ways of predicting the remaining useful life (RUL) of lithium-ion batteries, especially using data-driven approaches. The aim is a higher resolution of inter-cycle aging for faster and more accurate predictions, achieved by considering temporal patterns and cross-data correlations in the raw data, specifically the terminal voltage, current, and cell temperature. An in-depth analysis of the deep learning models using an uncertainty metric, t-SNE of the features, and various battery-related tasks was presented in [23]. The proposed framework significantly accelerated remaining-useful-life prediction (25× faster) and resulted in a 10.6% mean absolute error rate.
RUL predictions for LIBs are of great importance to the health management of electric vehicles and hybrid electric vehicles. Fluctuations and nonlinearity during battery degradation cause difficulties in both RUL prediction accuracy and model adaptability. To face this challenge, ref. [24] proposed a prognostic approach for the RUL prediction of LIBs that integrates sequence decomposition and deep learning. To separate the local fluctuations and the global degradation trend from the battery aging data, complementary ensemble empirical mode decomposition and principal component analysis are applied. Since a long short-term memory (LSTM) network with fully connected (FC) layers makes good use of offline and online data, an LSTM neural network combined with FC layers was designed as a transfer learning model, and the hyper-parameter optimization and fine-tuning strategy of the model were developed based on offline training data. The illustrative results demonstrate that the proposed approach can achieve adaptive, accurate, and robust predictions for both the RUL and the capacity trajectory.
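A hedged sketch of this kind of structure follows: PCA stands in for the decomposition step, and an LSTM followed by fully connected layers regresses a proxy remaining useful life; all shapes, window lengths, and data are assumptions, not the setup of [24].

```python
# Decomposition (PCA as a stand-in) + LSTM + fully connected layers for RUL regression.
import numpy as np
from sklearn.decomposition import PCA
from tensorflow import keras
from tensorflow.keras import layers

# Placeholder per-cycle measurements: 500 cycles x 3 raw channels (voltage, current, temperature).
raw = np.random.rand(500, 3)
trend = PCA(n_components=1).fit_transform(raw)         # global degradation trend (1 component)

def make_windows(series, length=32):
    """Slice the trend into sliding windows with a proxy 'cycles remaining' target."""
    X = np.stack([series[i:i + length] for i in range(len(series) - length)])
    y = np.arange(len(series) - length, 0, -1, dtype=float)
    return X, y

X, y = make_windows(trend)                              # X shape: (468, 32, 1)

model = keras.Sequential([
    layers.Input(shape=(32, 1)),
    layers.LSTM(32),                                    # long- and short-term memory over cycles
    layers.Dense(16, activation="relu"),                # fully connected layers
    layers.Dense(1)                                     # predicted RUL (in cycles)
])
model.compile(optimizer="adam", loss="mae")
model.fit(X, y, epochs=3, batch_size=16, verbose=0)
```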
By analyzing routine computed tomography, the authors of [16] proposed an automatic deep learning system for COVID-19 diagnostics and prognostics. Deep learning is a convenient tool for diagnosing COVID-19 and identifying patients at risk, which may aid in resource optimization and the early prevention of disease before severe symptoms occur. A CT scan can be acquired within minutes when a patient is suspected of having COVID-19; once this deep learning system has been applied, it is possible to predict whether a patient has COVID-19. The deep learning system also predicts the prognosis of patients diagnosed with COVID-19, which can be used to identify high-risk patients who require urgent medical attention. The system is fast and does not require human-assisted image annotation, increasing its clinical value and robustness, and it takes less than 10 s to produce a prognostic and diagnostic prediction for a patient's chest CT scan [16]. The authors of [25] reported on how experts (medical or otherwise) and technicians have used deep learning techniques to combat the COVID-19 outbreak. The rapid development of artificial intelligence (AI) in the area of medical image analysis has also helped to combat COVID-19 by providing high-quality diagnostic results and by reducing or eliminating the need for manpower. For COVID-19 diagnosis using CT and X-ray samples, deep-learning-based support systems have been developed; several of these systems use customized networks, and some were derived from pre-trained models with transfer learning. Similarly, machine learning and data science are actively used for COVID-19 diagnosis, prognosis, prediction, and outbreak forecasting. In addition, big data, smartphone technology, and the Internet of Things (IoT) enable innovative solutions to combat the spread of COVID-19. Researchers have used multiclass classification methods to distinguish images of patients with infectious diseases, such as COVID-19 viral pneumonia, bacterial pneumonia, fungal pneumonia, SARS, MERS, influenza, and tuberculosis, from those of healthy people; multiclass classifiers are more accurate than binary classifiers in detecting COVID-19 cases. CNNs and DNNs (deep neural networks) are the most significant classification methods for COVID-19 detection, followed by SVM, random forest, KNN (K-nearest neighbors), and LSTM; furthermore, the CNN is the most widely used classifier for COVID-19 diagnosis among machine learning/deep learning techniques. Machine learning/deep learning-based approaches can significantly enhance intelligent diagnosis systems, which is promising for healthcare professionals seeking to detect the virus quickly and reliably. Furthermore, they will eliminate manual errors made by physicians and radiologists during diagnosis and facilitate more accurate and time-efficient diagnoses. Researchers have used X-rays, CT images, RT-PCR, and clinical blood data to investigate COVID-19's prognosis and anomalies. In the most recently mentioned study, the highest accuracies in detecting COVID-19 were achieved by CNN, DNN, SVM, KNN, and random forest, with 99%, 99.7%, 99.68%, 93.41%, and 95%, respectively. Besides medical research and radiology, machine learning/deep learning techniques have made astounding performance gains in multiple domains.
In conclusion, machine learning and deep learning techniques could play a significant role in predicting, classifying, screening, and minimizing the spread of the COVID-19 pandemic [25]. The COVID-DeepNet system is one of the accurate methods used to diagnose COVID-19 infection, and it relies on chest radiography images. DBN algorithms combined with other deep learning architectures, such as the convolutional DBN, were trained on top of pre-processed chest radiography images to reduce overfitting and improve the generalization capabilities of the adopted deep learning approaches [36].
Numerous current applications have focused on using deep learning to analyze EEG signals, aiming at the identification and prediction of several challenging brain activities. Due to the wide field of applications and the efficiency and reliability of EEG results, the information it provides has become a key element in the health sector. From the reviewed works, it was found that EEG data, in combination with processing techniques (the Fourier transform (FT), fast Fourier transform (FFT), short-time Fourier transform (STFT), and wavelet transform (WT)) and machine learning tools such as support vector machines (SVMs) and neural networks (NNs), achieve an efficiency greater than 90%, making EEG a competitive tool for solving problems in its field of study. The authors of [37] used an extreme learning machine (ELM) and achieved 95%. EEG measures brain electrical activity without invasive procedures and has numerous medical applications, one of which is the detection of neurodegenerative diseases. Dementia diseases are increasing rapidly and have become an alarming, growing challenge for health systems. Various parameters or measurements can be extracted during the processing of EEG signals, such as frequency spectra; time–frequency representations; values such as entropy and fractal averages; and combinations of more than one parameter. The objective of the processing stage is to extract relevant information that allows the identification of EEG patterns/biomarkers that, during the classification stage, contribute to increasing efficiency [37].
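As a hedged illustration of the classical pipeline summarized above (spectral features followed by an SVM), the sketch below computes FFT-based band powers for placeholder EEG epochs and trains a support vector machine; the sampling rate, band definitions, and labels are assumptions.

```python
# FFT-based band-power features + SVM classifier for EEG epochs (illustrative only).
import numpy as np
from sklearn.svm import SVC

fs = 128                                               # sampling rate in Hz (assumed)
bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(epoch):
    """Average spectral power per EEG band for one 1D epoch."""
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
    power = np.abs(np.fft.rfft(epoch)) ** 2
    return [power[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in bands.values()]

epochs = np.random.randn(100, fs * 2)                  # 100 two-second epochs (placeholder)
labels = np.random.randint(0, 2, size=100)             # e.g. dementia vs. control (assumed)
features = np.array([band_powers(e) for e in epochs])

clf = SVC(kernel="rbf").fit(features, labels)
print(clf.score(features, labels))
```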
The deep learning algorithms presented can be implemented using Keras [38][39][40][41][42].

3. A Comparison of Deep Learning Algorithms

Table 1 summarizes the deep learning algorithms presented in this research, the domains in which they have been applied, and their reported accuracy rates; Table 2 then compares their advantages, disadvantages, and applications and indicates whether each algorithm is supervised or unsupervised.
Table 1. Deep learning algorithms used in different domains with accuracy.
Reference No. | Algorithm Used | Domain | Accuracy Rate
[30] | Backpropagation | Sonar target recognition | Nearly 100%
[31] | Stacked autoencoders | COVID-19 digital image recognition | 80%
[15] | Deep Boltzmann machine | Classification of hyperspectral medical images for the diagnosis of cancerous regions | 95.5%
[16] | Deep belief networks | Classification of normal and COVID-19 patients using medical images of chest X-rays | 90%
[18] | CNN | Diagnosis of Alzheimer's disease by a voxel-based hierarchical feature extraction method | 97%
[26] | CNN | Motor imagery classification based on sorted blind source separation, continuous wavelet transform, and convolutional neural network | 94.66%
[25] | CNN | Detection of COVID-19 | 99%
[25] | DNN | Detection of COVID-19 | 99.7%
[25] | SVM | Detection of COVID-19 | 99.68%
[25] | KNN | Detection of COVID-19 | 93.41%
[36] | DBN and convolutional DBN | Detection of COVID-19 in chest X-ray images | 99.93%
[37] | SVM and NN | Detection of dementia diseases | 90%
As listed in Table 2, backpropagation is a simple supervised algorithm which is easy and fast to program; however, it needs excessive training time and is very sensitive to noisy data. It can be used for speech recognition, face recognition, character recognition, etc. An autoencoder is an unsupervised learning algorithm that learns several layers efficiently and is mainly used for dimensionality reduction; however, it is a lossy technique. A variational autoencoder is an autoencoder that maps a high-dimensional dataset into a distribution. Variational autoencoders are more challenging to implement than plain autoencoders, but they produce high-quality images, and their main application is generating new data related to the source data. RBMs are unsupervised learning algorithms useful for dimensionality reduction, classification, regression, feature learning, etc. They are restricted in terms of the connections between the input layer and the hidden layer; they are also very tricky to train, and their loss cannot be tracked. DBNs are also unsupervised learning algorithms belonging to the family of generative graphical models. They identify deep patterns in the input data and can learn an optimal set of parameters quickly, even for models with large numbers of parameters and layers with nonlinearity, but they are very slow and inefficient; their main applications are classification and clustering. CNNs are supervised learning algorithms designed to learn spatial hierarchies of features automatically through backpropagation, using multiple building blocks such as convolution layers, pooling layers, and fully connected layers. They are easy to train and implement and are efficient in pre-training and feature extraction; however, they need large amounts of memory to store their intermediate results. Their main uses are in video and image recognition, natural language processing, image classification, etc. An RNN is a supervised learning algorithm that can relearn from old data as new data are acquired, which is beneficial for time series; however, the recurrent nature of RNNs makes their computations slow. RNNs have many applications, such as machine translation, speech and voice recognition, handwriting recognition, etc. GANs have no need for Markov chains and have many applications, such as image analysis. A GAN exhibits an inverse relationship between the cost functions of the discriminator and the generator: when the cost function of the discriminator increases, the cost function of the generator decreases, and vice versa. CapsNets can be applied in medical applications, such as brain tumor classification; they use few parameters but face scalability problems with complex data. A transformer uses attention mechanisms and does not depend on convolutions or recurrence; however, attention operates on fixed-length text strings, and text chunking causes context fragmentation. Transformers have several applications, such as sequence transduction, which relates to language translation. ELMo can be applied in sentiment analysis, question answering, etc. Both building embeddings and the training process are very slow because of the complex nature of the Bi-LSTM architecture, although its character-level, deeply contextual word representations can be adapted to complex tasks. BERT, a language representation model, is computationally costly at inference time.
BERT can be applied to language inference and question answering. The first categorization of attention models according to four dimensions, namely input representation, distribution function, compatibility function, and input and/or output multiplicity, was proposed in [43]. Attention in natural language processing faces challenges, such as the analysis of attention for model evaluation and attention for deep-network investigations. Attention in natural language processing has several uses, such as feature selection, auxiliary tasks, the building of contextual embeddings, and others.
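As a hedged, generic illustration of the core operation behind these attention-based models (not any specific model's code), the following sketch implements scaled dot-product attention in NumPy, showing the compatibility (dot-product) scores and the softmax distribution function mentioned above; all shapes are arbitrary.

```python
# Scaled dot-product attention: weight values V by the compatibility of queries Q with keys K.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Return attention output; softmax over scores acts as the distribution function."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                             # compatibility function
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)     # softmax distribution
    return weights @ V

Q = np.random.rand(4, 8)    # 4 query positions, dimension 8 (assumed)
K = np.random.rand(6, 8)    # 6 key positions
V = np.random.rand(6, 16)   # values aligned with the keys
out = scaled_dot_product_attention(Q, K, V)                     # shape (4, 16)
```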
Table 2. Comparison of deep learning algorithms [9][44][45][46][47][48][49][50][51][52][53][54][55][56][57][58][59][60][61][62][63][64][65][66][67].
Algorithm | Supervised/Unsupervised | Advantages | Disadvantages | Applications
Backpropagation (Supervised)
Advantages:
  • Easy to implement.
  • Flexible and efficient.
Disadvantages:
  • Might not perform well when applied to testing data.
  • Can be sensitive to noisy data.
  • Often gets stuck in local optima.
  • May experience overfitting if the training set is not big enough.
Applications:
  • Optical character recognition.
  • Image or speech recognition.
Autoencoders (Unsupervised)
Advantages:
  • Can learn nonlinear transformations with a nonlinear activation function and multiple layers.
  • Does not have to learn dense layers.
  • More efficient to learn several layers with an autoencoder.
Disadvantages:
  • Data-specific.
  • Lossy.
Applications:
  • Data denoising.
  • Dimensionality reduction.
  • Image reconstruction.
  • Image colorization.
  • Anomaly detection.
  • Data compression.
  • Feature variation.
Variational autoencoders (Unsupervised)
Advantages:
  • A better organization of the latent space results in high-quality images.
Disadvantages:
  • Generated samples tend to be blurry.
Applications:
  • Learning latent representations.
  • Drawing images.
  • Achieving state-of-the-art results in semi-supervised learning and interpolating between sentences.
RBMs (Unsupervised)
Advantages:
  • Can perform pattern completion.
  • Can be used as feature extractors.
  • Can be used to train other models.
  • Can be stacked to pre-train a deeper feedforward neural model.
Disadvantages:
  • Tricky to train well.
  • Unable to track the required loss (let alone take derivatives with respect to it).
Applications:
  • Dimensionality reduction.
  • Classification.
  • Regression.
  • Collaborative filtering.
  • Feature learning.
  • Topic modeling.
DBNs (Unsupervised)
Advantages:
  • Can learn an optimal set of parameters quickly, even for models with large numbers of parameters and layers with nonlinearity.
  • The output values of variables in the lowest layer can be computed with an approximate inference procedure.
Disadvantages:
  • The approximate inference procedure is limited to a single bottom-up pass and never readjusts the other layers or parameters of the network.
  • Very slow and inefficient.
Applications:
  • Recognizing, clustering, and generating images, video sequences, and motion capture data.
  • Classification and feature learning.
CNNs (Supervised)
Advantages:
  • Saves memory.
  • Uses fewer parameters.
  • Easy to train.
  • Able to learn relevant features from an image or video at different levels.
  • Can capture the 2D structure of an input image through local connections and weights.
  • Better performance.
  • Very efficient in feature extraction and efficient at pre-training.
  • Easy to implement from a practical perspective.
Disadvantages:
  • Require an enormous amount of memory to store all the intermediate results.
  • Often get confused by images and mistakenly categorize objects during the early stages of training.
  • Of no use if the data are completely unstructured, such as in an Excel spreadsheet.
Applications:
  • Video and image recognition.
  • Natural language processing.
  • Image classification and analysis.
RNNs (Supervised)
Advantages:
  • Capable of processing inputs of any length.
  • Designed to remember all the previous information over time, which is very useful in time-series prediction.
  • Model size does not increase as input size increases.
  • Weights can be shared across the time steps.
  • Gives an effective pixel neighborhood with the help of convolutional layers.
  • In contrast to feedforward neural networks, RNNs can use their internal memory to process an arbitrary series of inputs.
Disadvantages:
  • Computation is slow due to the recurrent nature of the network.
  • Training is a difficult process.
  • With ReLU or tanh activation functions, it is very tedious to process long sequences.
  • Faces problems such as exploding or vanishing gradients.
Applications:
  • Machine translation.
  • Robot control.
  • Speech or voice recognition.
  • Composing music.
  • Learning grammar.
  • Predicting the location of proteins in a cell.
  • Prediction in business management and administrative tasks.
  • Rhythm learning.
  • Handwriting recognition.
GANs (Unsupervised)
Advantages:
  • No need to use Markov chains; gradients can be obtained using backpropagation alone.
  • No inference is required during learning.
  • A wide variety of functions can be incorporated into the model.
  • Adversarial models may gain a statistical advantage from the fact that the generator network is not updated directly with data examples.
  • Adversarial networks can represent very sharp, even degenerate, distributions, whereas Markov-chain approaches require somewhat blurry distributions so that the chains can mix between modes.
  • The shape of the generator's probability distribution does not need to be defined explicitly.
  • Sampling of generated data can be parallelized.
  • No probability has to be estimated through a lower bound, as in a VAE.
  • Can offer satisfactory results, sometimes better than those of a VAE.
Disadvantages:
  • There is no explicit representation of pg(x).
  • During training, D must be kept well synchronized with G, much as the negative chains of a Boltzmann machine must be kept up to date between learning steps.
  • Increasing the cost function of the discriminator decreases the cost function of the generator, and vice versa.
  • The GAN game may fail to converge, and instability can occur.
  • The mode collapse problem.
Applications:
  • Image applications.
  • Domain adaptation.
  • Applications based on sequential data.
  • Improvement of recognition and classification.
  • Miscellaneous applications such as drug discovery and molecule development in oncology.
Capsnets (Unsupervised)
Advantages:
  • Viewpoint invariance.
  • Fewer parameters.
  • Better generalization to new viewpoints.
  • Defense against white-box adversarial attacks.
  • Validatable.
  • Require a smaller volume of training data.
Disadvantages:
  • Scalability problems with complex data.
  • Capsules are required to model everything.
  • The structure forces the representation of entities.
  • Applying a capsnet to a new dataset normally requires a new loss function.
  • Crowding.
  • Implementations are not yet optimal.
Applications:
  • Medical applications, such as classifying the type of brain tumor.
  • Used together with RNNs for sentiment analysis.
Transformer (Semi-supervised)
Advantages:
  • Relies only on attention mechanisms, with no recurrence or convolutions.
  • Uses attention to increase the speed with which models can be trained.
  • Outperforms the Google Neural Machine Translation model on particular tasks.
  • Lends itself to parallelization.
Disadvantages:
  • Attention can only deal with fixed-length strings of text.
  • Text chunking causes context fragmentation.
  • In the encoder-decoder attention layers, the memory keys and values come from the encoder output, while the queries come from the previous decoder layer.
  • The encoder contains self-attention layers; within a self-attention layer, all of the keys, values, and queries come from the output of the previous encoder layer.
  • In the encoder, every position can attend to all positions in the previous encoder layer.
  • In the decoder, the self-attention layers allow every position to attend to all positions in the decoder up to and including that position.
Applications:
  • Sequence transduction (i.e., language translation), classic language-analysis tasks such as syntactic constituency parsing, and tasks with dissimilar input and output patterns such as co-reference resolution.
  • Videos and images.
ELMo (Unsupervised)
Advantages:
  • Character-level word representations and deep context can be adapted to more complex tasks.
  • Works much better than simple embeddings on many tasks, and the pretrained form is easy to access.
Disadvantages:
  • The complex nature of the Bi-LSTM architecture makes building embeddings and the training process quite slow.
  • Comparable models, such as Flair, work better.
  • Struggles with long-term context dependencies.
Applications:
  • Sentiment analysis.
  • Question answering.
  • Textual entailment.
  • Named entity extraction.
  • Coreference resolution.
  • Semantic role labeling.
BERT (Unsupervised)
Advantages:
  • BERT is deeply bidirectional; deep bidirectional representations can be pre-trained with the masked language models used by BERT.
  • BERT obtains state-of-the-art performance on a large set of token- and sentence-level tasks and outperforms several task-specific architectures.
  • BERT builds contextualized word embeddings (vectors).
Disadvantages:
  • BERT is computationally expensive at inference time; in other words, it is costly.
Applications:
  • Language inference and question answering.
Attention in natural language processing (Supervised)
Advantages:
  • The first classification of attention models in terms of four dimensions: input representation, distribution function, compatibility function, and input and/or output multiplicity.
Disadvantages (challenges):
  • Attention for deep-network investigation.
  • Attention for sample weighting and outlier detection.
  • Analysis of attention for model evaluation.
  • Unsupervised learning with attention.
  • Neural symbolic learning and reasoning.
Applications:
  • Feature selection (e.g., multimodal tasks).
  • Auxiliary tasks (e.g., semantic role labeling and visual question answering).
  • Building contextual embeddings (e.g., sentiment analysis, information extraction, and machine translation).
  • Sequence-to-sequence annotation (e.g., machine translation).
  • Word selection (e.g., cloze question answering and dependency parsing).
  • Multiple input processing (e.g., question answering).

4. Applying Deep Learning Algorithms in Healthcare

Various deep learning algorithms are used in practical applications in the current situation with the COVID-19 pandemic, and deep learning has proven useful to governments worldwide in healthcare. These applications provide image-processing, classification, or clustering capabilities, or predict when life will resume as before.
As of the time of writing, more than 213 million patients have been diagnosed with COVID-19, and 4.4 million of these patients have passed away, according to the Coronavirus Resource Centre at Johns Hopkins University [68]. This pandemic poses a dire threat to human civilization. In the pre-COVID-19 era, deep learning was not viewed as a sustainable approach for health informatics because of the large amounts of training data and computational resources it requires, compared to other algorithms that do not require similar effort and tuning [30]. Deep learning techniques were also frequently viewed negatively due to their lack of interpretability [14]. However, COVID-19 has created a need for robust research to find the best classification, screening, and diagnostic measures. Search strategies were used to track the progress of machine learning and deep learning in predicting, detecting, and diagnosing COVID-19: convolutional neural networks, deep neural networks, and support vector machine algorithms have exhibited accuracies of up to 99% in detecting the virus [69].
In the medical field and healthcare, deep learning has been used to detect and differentiate among COVID-19, viral pneumonia, and healthy chest X-rays using image-processing capabilities [70]. The COVID-DeepNet system has been proposed for identifying COVID-19 in chest X-ray (CX-R) images [36], helping experienced radiologists interpret the images quickly and accurately [36]. The results come from two different methods relying on the combination of a convolutional deep belief network and a deep belief network, trained from scratch on a large dataset [36]. The developed system appears to offer precision and efficiency; it can be used to identify COVID-19 through early diagnosis and to follow up treatment, with a decision for each image taking less than 3 s [36].
For diagnosis, techniques such as the Covid-Net CNN, ConoNet CNN, Bayes SqueezeNet, and CoroNet autoencoders have performed superiorly, with high accuracies. Other deep learning algorithms, such as WOA-CNN, CRNet, and CNNs, have diagnosed COVID-19 from CT-scan datasets [71].
A fully automatic deep learning system for COVID-19 diagnosis and prognostic analysis has utilized computed tomography [36]. A total of 5273 patients, along with their computed tomography images, were gathered from seven cities or provinces [16]. The computed tomography images of 4106 patients were used to pre-train the deep learning system, enabling it to learn lung features [16]. Subsequently, 1266 patients from six cities or provinces were enrolled to train and externally validate the performance of the deep learning system [16]. Of these 1266 patients, 924 had COVID-19 (471 were followed up for > days) and 342 had other forms of pneumonia [16]. The deep learning system achieved satisfactory performance in distinguishing COVID-19 from viral and other pneumonia in the four external validation sets [16]. In addition, the system can group patients into low- and high-risk groups whose lengths of hospital stay differ significantly [16]. Rapid diagnosis of COVID-19 and identification of high-risk patients can thus be achieved using deep learning, which can help optimize medical resources and help patients before they reach a critical state [16].
In order to classify the image tissue, [72] utilized SegNet and U-NET, two well-known deep learning networks. U-NET is a medical segmentation tool, and SegNet is a scene segmentation network [72]. SegNet and U-NET were used as binary segmentors to distinguish between infected and healthy lung tissue, and the two networks can also be used as multi-class segmentors to learn the infection within the lung [72]. Each network used seventy-two images for training, ten images for validation, and eighteen images for testing [72]. The results showed that SegNet is capable of differentiating between healthy and infected tissues, while U-NET provided better results as a multi-class segmentor [72].
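A hedged, minimal encoder-decoder sketch in the spirit of such binary lung-tissue segmentors is shown below; the depth, filter counts, input size, and placeholder data are assumptions and are far smaller than the published U-NET/SegNet architectures.

```python
# Tiny encoder-decoder segmentation model with one skip connection, for illustration only.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(128, 128, 1))
# Encoder
c1 = layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
p1 = layers.MaxPooling2D()(c1)
c2 = layers.Conv2D(32, 3, padding="same", activation="relu")(p1)
# Decoder with a skip connection back to the encoder
u1 = layers.UpSampling2D()(c2)
u1 = layers.Concatenate()([u1, c1])
c3 = layers.Conv2D(16, 3, padding="same", activation="relu")(u1)
outputs = layers.Conv2D(1, 1, activation="sigmoid")(c3)   # per-pixel infected/healthy probability

seg = keras.Model(inputs, outputs)
seg.compile(optimizer="adam", loss="binary_crossentropy")

X = np.random.rand(8, 128, 128, 1)                         # placeholder CT slices
y = (np.random.rand(8, 128, 128, 1) > 0.5).astype("float32")  # placeholder masks
seg.fit(X, y, epochs=1, verbose=0)
```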
Additionally, using models that apply convolutional and recurrent neural networks, applications can be created to predict future vaccination patterns [72]. Deterministic and stochastic recurrent neural networks were used with unsupervised learning methods to predict the geographic spread of the active virus in order to plan vaccine distribution, with the USA as a case study [73].
Machine learning helps governments and health ministries to prepare and schedule the dosages required for the public. In addition, algorithms have helped to forecast case numbers and mortality statistics in many countries. Deep learning has improved the accuracy of such predictions, enabling better data-driven decisions about easing or enforcing lockdowns [74]. A case study conducted in India, which considered no external factors that could affect the rate of spread, used a linear regression model to predict how long lockdowns should last in order to eradicate COVID-19 from India [67].
The authors of [70] discussed the ways in which deep learning assisted during the COVID-19 pandemic and presented guidelines for upcoming research on COVID-19. The authors of [69] provided applications of deep learning in different fields, such as computer vision, natural language processing, epidemiology, and the life sciences. The authors of [75] described the different applications of big data and methods for constructing learning tasks. In addition, ([74], p. 19) introduced deep learning's limitations when applied during COVID-19: generalization metrics, interpretability, data privacy, and learning from limited labelled data.
Deep learning algorithms can be used to forecast the numbers of COVID-19 cases and deaths. The multivariate CNN algorithm outperformed the LSTM in terms of validation accuracy and forecasting consistency, and CNNs have been suggested for long-term forecasting in the absence of seasonality and periodic patterns in time-series datasets [73]. One study noted that DL techniques have a significant impact on the early detection of COVID-19 with a high accuracy rate; most of the reviewed studies used deep learning to detect COVID-19 cases at an early stage based on different diagnostic techniques, the most widely used being the convolutional neural network (CNN) and transfer learning (TL) [76]. Another work detailed how the use of AI in COVID-19 investigation can be summarized in terms of clinical image examination, drug design, and pandemic prediction against the coronavirus; the study revealed that, in a considerable number of patients with suspected COVID-19 pneumonia, CT was examined following CXR, potentially causing impairment in the absence of pre-defined diagnostic work-up criteria, and the DNN architectures were built from the ground up instead of applying transfer learning techniques [77]. Deep learning applications to detect the symptoms of COVID-19, AI-based robots to maintain social distancing, blockchain technology to maintain patient records, mathematical modeling to predict and assess the situation, and big data to trace the spread of the virus are among the technologies that have contributed immensely to curtailing this pandemic [78]. In another study, the authors designed a weakly supervised deep learning architecture for the fast and fully automated detection and classification of COVID-19 cases using retrospectively extracted CT images from multiple scanners and multiple centers; it can accurately distinguish COVID-19 cases from CAP (community-acquired pneumonia) and NP (non-pneumonia) patients, spot the exact position of the lesions or inflammation caused by COVID-19, and provide details about patient severity to guide subsequent triage and treatment, and the experimental findings indicated that the proposed model achieves high accuracy, precision, and promising qualitative visualization for lesion detection [79]. Another work provided a solution for recognizing pneumonia due to COVID-19 versus healthy lungs (normal persons) using CXR (chest X-ray) images; the authors used a state-of-the-art technique, the genetic deep learning convolutional neural network (GDCNN), trained from scratch to extract features and group images into COVID-19 and normal classes, and the proposed method performed better than other transfer learning techniques, attaining a classification accuracy of 98.84%, a precision of 93%, a sensitivity of 100%, and a specificity of 97.0% in COVID-19 prediction [80]. As long as the number of patients is very high and the required level of radiological expertise is low, deep-learning-based recommender systems can be of great assistance in diagnosing COVID-19; one study examined four different deep CNN architectures on chest X-ray images of COVID-19 patients for diagnosis recommendation, with the MobileNet model coming out on top. Based on the results, CNN-based architectures have the potential to diagnose COVID-19 correctly, transfer learning plays a key role in improving detection accuracy, and further fine-tuning of these models may improve their accuracy [81].
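The sketch below gives a hedged, minimal example of the multivariate 1D-CNN forecasting idea mentioned at the start of the previous passage: several daily series over a sliding window predict the next day's case count; the window length, series, and data are placeholder assumptions.

```python
# Multivariate 1D-CNN time-series forecaster (illustrative shapes and placeholder data).
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

window, n_series = 14, 3                               # 14-day window of 3 series (assumed)
model = keras.Sequential([
    layers.Input(shape=(window, n_series)),
    layers.Conv1D(32, kernel_size=3, activation="relu"),   # temporal convolution over the window
    layers.GlobalAveragePooling1D(),
    layers.Dense(16, activation="relu"),
    layers.Dense(1)                                    # next-day case count
])
model.compile(optimizer="adam", loss="mae")

X = np.random.rand(300, window, n_series)              # placeholder windows
y = np.random.rand(300)
model.fit(X, y, epochs=3, batch_size=32, verbose=0)
```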
The authors of [82] presented a clinical decision support system for the early detection of COVID-19 using deep learning based on chest X-ray images. They developed an architecture made up of three stages: the first stage includes pre-processing of the input images followed by data augmentation; the second stage includes feature extraction followed by learning; and the third stage produces the classification and prediction with a fully connected network of several classifiers. The proposed deep learning algorithm provided an AUC of 0.97 for internal validation and 0.95 for external validation on the chest X-ray images, with accuracies of 92.5% and 87.5%, respectively [82]. Other research addressed how AI provides safe, accurate, and efficient imaging solutions in COVID-19 applications; two imaging techniques, X-ray and CT, were used to show the effectiveness of AI-empowered medical imaging for COVID-19 [79]. Imaging only gives partial information about patients with COVID-19, so it is important to integrate imaging data with both clinical manifestations and laboratory examination results for better screening, detection, and diagnosis of COVID-19; AI demonstrates its natural capability to fuse information from these multi-source data in order to perform accurate and efficient diagnosis, analysis, and follow-up [79]. In another paper, the significance of AI-driven tools and suitable training and testing models was discussed: AI-driven tools need to be implemented from the beginning of data collection, in parallel with experts in the field, where active learning needs to be implemented; during the decision-making process, multiple data types should be used rather than just one in order to increase confidence, and multitudinal and multimodal data were discussed as part of active learning [83]. The authors of [84] presented an artificial intelligence (AI) system for rapid COVID-19 detection and extensively analyzed COVID-19 CTs using AI; they evaluated the system using large datasets consisting of more than 10 thousand CT scans from COVID-19, influenza-A/B, community-acquired pneumonia (CAP), and other subjects. On a test cohort of 3199 scans, the deep convolutional neural network-based system achieved an area under the receiver operating characteristic curve of 97.81%, and the AI system outperformed all five radiologists in a reader study by two orders of magnitude when facing more challenging tasks [84]. COVID-19 is a new disease announced on 11 February 2020. The major symptoms of COVID-19 are fever, breathing difficulty, dry cough, headache, runny nose, nasal blockage, body pain, and throat pain [85]. The disease can easily be transferred to others through droplets: respiratory droplets of >5–10 µm spread the virus and are transmitted more easily through direct contact than droplet nuclei with particle sizes <5 µm, and droplet transmission occurs within 1 m of direct contact with a COVID-19-infected person. As of 22 May 2020, there were 4,995,996 confirmed cases of coronavirus in over 216 countries and 327,821 confirmed deaths [85]. The COVID-19 pandemic has exposed the vulnerability of healthcare services worldwide, and there is a clear need to develop computer-assisted diagnosis tools to provide rapid and cost-effective screening to identify SARS-CoV-2. CBC, chest X-ray, polymerase chain reaction (PCR), chest CT, and the IgM/IgG combo test are the diagnostic methods generally utilized for COVID-19.
Among these, CT scans provide the best performance in COVID-19 diagnosis [86]. Artificial intelligence (AI) applications related to COVID-19 include medical imaging, drug development, delineation of lung damage, and cough sample analysis, among others.
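To make the three-stage structure of the chest X-ray system in [82] concrete, the following is a minimal Keras sketch with an augmentation stage, a convolutional feature-extraction stage, and a fully connected classification stage evaluated with AUC. The layer sizes, augmentation settings, and input resolution are illustrative assumptions, not the configuration reported in [82].

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Stage 1: pre-processing and data augmentation of incoming chest X-rays.
augment = models.Sequential([
    layers.Rescaling(1.0 / 255),
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.05),
    layers.RandomZoom(0.1),
])

# Stage 2: convolutional feature extraction.
features = models.Sequential([
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
])

# Stage 3: fully connected classification and prediction.
classifier = models.Sequential([
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.4),
    layers.Dense(1, activation="sigmoid"),
])

inputs = tf.keras.Input(shape=(224, 224, 3))
outputs = classifier(features(augment(inputs)))
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])
model.summary()
```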
AI algorithms can analyze CT images of patients with clinical symptoms to produce a rapid COVID-19 diagnosis. A CNN is used to classify the medical images and diagnose the disease as pneumonia or COVID-19; a linear support vector machine, VGG16, and Inception V3 were used in one study [87]. Supervised machine learning algorithms such as SVM, random forest, and naïve Bayes are widely used to predict disease. Random forest provided the highest accuracy in 9 of the 17 studies in which it was applied (53%), and SVM produced the best classification for disease prediction in three of these studies [88]. Automatic investigative systems based on AI and ML have been developed to detect the coronavirus accurately and rapidly and to protect healthcare workers in direct contact with COVID-19 patients [89]. The DeTraC deep CNN was used to classify chest X-ray images of suspected COVID-19 cases, achieving 95.12% accuracy [87]. Extracting CT image features with a deep learning model can support clinical diagnosis while saving time [90]. A challenging task is distinguishing COVID-19 coughs from non-COVID-19 coughs: the AI4COVID-19 application records 2 s cough samples and matches them to COVID-19 or non-COVID-19 patients with about 90% accuracy [91]. Deep learning techniques have also been applied to computed tomography (CT) images to distinguish between coronavirus and pneumonia [92]. In another study, COVID-19 and pneumonia X-ray image classes were created with a fuzzy coloring technique as a preprocessing step, the preprocessed datasets were trained with DL models such as SqueezeNet and MobileNetV2, and a support vector machine was used to classify the extracted features, reaching up to 99.27% accuracy [93].
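A minimal sketch of the "pretrained CNN features plus SVM" pattern used in several of the studies above (for example, VGG16 features classified by a linear SVM) might look as follows; the random arrays stand in for real chest images, and the model choice and hyperparameters are assumptions for illustration only.

```python
import numpy as np
import tensorflow as tf
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

np.random.seed(0)

# ImageNet-pretrained VGG16 without its classification head, used as a fixed
# feature extractor (global average pooling yields a 512-dimensional vector).
vgg = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                  pooling="avg", input_shape=(224, 224, 3))

# Stand-in data: random "images" with binary labels (1 = COVID-19, 0 = normal).
X_img = np.random.rand(16, 224, 224, 3).astype("float32") * 255.0
y = np.random.randint(0, 2, size=16)

# Extract deep features, then train a linear SVM on them.
feats = vgg.predict(tf.keras.applications.vgg16.preprocess_input(X_img), verbose=0)
X_train, X_test, y_train, y_test = train_test_split(feats, y, test_size=0.25,
                                                    random_state=0)
svm = LinearSVC(C=1.0, max_iter=5000)
svm.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, svm.predict(X_test)))
```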
Ten convolutional neural networks were evaluated to distinguish between coronavirus and non-coronavirus groups. Among these networks, Xception and ResNet-101 achieved the highest performance, each with an area under the curve (AUC) of 0.994, while the radiologist’s performance was moderate, with an AUC of 0.873 [94].
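The AUC comparison reported above can be reproduced in form (though not in numbers) with scikit-learn: given each model's predicted probabilities on the same test set, roc_auc_score ranks them directly. The labels and probabilities below are random stand-ins, not results from [94].

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)        # 1 = COVID-19, 0 = non-COVID-19

# Hypothetical predicted probabilities from each model/reader on the same cases.
predictions = {
    "Xception": rng.random(200),
    "ResNet-101": rng.random(200),
    "radiologist": rng.random(200),
}
for name, probs in predictions.items():
    print(f"{name}: AUC = {roc_auc_score(y_true, probs):.3f}")
```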
ResNet-101, a deep convolutional neural network, combined with logistic regression (LR) was used to identify COVID-19. The CT scan image is given as input to the ResNet-101 model, which is 101 layers deep with 33 residual blocks, and the result is passed to a logistic regression classifier that labels the image as COVID-19, normal, or pneumonia. The test produced 99.15% accuracy for COVID-19-positive patients. The COVID-19 diagnosis output ranges between the threshold values 0 and 1: 1 indicates that the person is affected by coronavirus or pneumonia, and 0 indicates that the person is not affected. To follow the patient’s condition from the initial stage to a severe stage, the threshold value is also used for CT image classification: a value of 0.5 indicates the initial stage, and 1 indicates a severe condition [86]. One of the latest techniques for COVID-19 diagnosis and classification is the depthwise separable convolution neural network (DWS-CNN) with a deep support vector machine (DSVM), enabled by the Internet of Things (IoT). The procedure consists of two stages, training and testing, and starts with collecting data from patients using IoT devices and sending them to a cloud server via 5G networks. The aim of the DWS-CNN is to determine the binary and multi-class labels of COVID-19 from CXR images. CNNs serve as the basis of new models for predicting diseases because of their efficiency at detecting structural abnormalities. This model was developed to improve the accuracy of COVID-19 detection from chest X-ray scans and to minimize the manual interaction required from radiologists [95].
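A minimal sketch of the two-stage classifier described above, with a ResNet-101 backbone producing deep features from CT slices and a logistic-regression classifier assigning the COVID-19/normal/pneumonia label, might look like this. The synthetic arrays replace real CT data, and the feature-pooling and classifier settings are illustrative assumptions rather than the configuration of [86].

```python
import numpy as np
import tensorflow as tf
from sklearn.linear_model import LogisticRegression

np.random.seed(0)

# 101-layer ResNet backbone, used here only as a fixed deep feature extractor.
resnet = tf.keras.applications.ResNet101(include_top=False, weights="imagenet",
                                         pooling="avg", input_shape=(224, 224, 3))

# Stand-in CT slices with three-class labels (0 = normal, 1 = COVID-19, 2 = pneumonia).
X_ct = np.random.rand(12, 224, 224, 3).astype("float32") * 255.0
y = np.random.randint(0, 3, size=12)

# Deep features from the backbone feed a logistic-regression classifier.
feats = resnet.predict(tf.keras.applications.resnet.preprocess_input(X_ct), verbose=0)
clf = LogisticRegression(max_iter=1000)
clf.fit(feats, y)

# Class probabilities can then be thresholded (e.g., at 0.5) to separate
# early-stage from severe findings, as described above.
print(clf.predict_proba(feats[:3]))
```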
Parkinson’s disease (PD) can be diagnosed and detected by a CNN from drawing movements, using the Digitized Graphics Tablet dataset. This CNN includes two parts: feature extraction (convolutional layers) and classification (fully connected layers). The fast Fourier transform of the drawing signals, restricted to frequencies between 0 and 25 Hz, is used as the input to the CNN [96][97][98]. Skin cancer is one of the most widespread causes of death, and early detection can increase the survival rate to 90%. For that purpose, deep convolutional neural networks (DCNNs) have been developed and applied to classify color images of skin cancer into three types: melanoma, atypical nevus, and common nevus [99][100][101].
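Relating to the Parkinson's detection approach above, the following is an illustrative sketch: the drawing signal from a digitizing tablet is transformed with the FFT, the 0–25 Hz band is kept, and a small 1D CNN (convolutional feature extraction followed by fully connected classification) separates PD from control recordings. The sampling rate, segment length, and layer sizes are assumptions for the example, not the values used in [96].

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

FS = 100          # assumed tablet sampling rate in Hz
N = 512           # assumed samples per drawing segment

def band_spectrum(signal, fs=FS, fmax=25.0):
    """Magnitude spectrum of one segment, restricted to 0-25 Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spec = np.abs(np.fft.rfft(signal))
    return spec[freqs <= fmax]

# Stand-in data: random pen-coordinate segments for PD (1) and control (0) subjects.
np.random.seed(0)
X_raw = np.random.randn(60, N)
y = np.random.randint(0, 2, size=60)
X = np.stack([band_spectrum(s) for s in X_raw])[..., np.newaxis]

# Convolutional feature extraction followed by fully connected classification.
model = models.Sequential([
    layers.Conv1D(16, 5, activation="relu", input_shape=X.shape[1:]),
    layers.MaxPooling1D(2),
    layers.Conv1D(32, 5, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=16, verbose=0)
```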

References

  1. Deep Learning with Python|The All You Need to Know Tutorial, Edureka, 19 February 2019. Available online: https://www.edureka.co/blog/deep-learning-with-python/ (accessed on 31 August 2021).
  2. McCulloch, W.S.; Pitts, W. A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 1943, 5, 115–133.
  3. The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain|Semantic Scholar. Available online: https://www.semanticscholar.org/paper/The-perceptron%3A-a-probabilistic-model-for-storage-Rosenblatt/5d11aad09f65431b5d3cb1d85328743c9e53ba96 (accessed on 31 August 2021).
  4. Block, H.D. A review of perceptrons: An introduction to computational geometry. Inf. Control 1970, 17, 501–522.
  5. Werbos, P.J. Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences; Harvard University: Cambridge, MA, USA, 1975.
  6. Fukushima, K. Neocognitron: A hierarchical neural network capable of visual pattern recognition. Neural Netw. 1988, 1, 119–130.
  7. Jordan, M.I. Serial Order: A parallel Distributed Processing Approach; University of California: San Diego, CA, USA, 1986.
  8. Lecun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324.
  9. Hinton, G.E.; Osindero, S.; Teh, Y.-W. A Fast Learning Algorithm for Deep Belief Nets. Neural Comput. 2006, 18, 1527–1554.
  10. Hinton, G.E. Reducing the Dimensionality of Data with Neural Networks. Science 2006, 313, 504–507.
  11. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444.
  12. Belanche, L.A. Some Applications of MLPs Trained with Backpropagation. p. 16. Available online: https://www.cs.upc.edu/~belanche/Docencia/apren/2009-10/Excursiones/Some%20Applications%20of%20Bprop.pdf (accessed on 1 September 2021).
  13. Simidjievski, N.; Bodnar, C.; Tariq, I.; Scherer, P.; Andres Terre, H.; Shams, Z.; Shams, Z.; Jamnik, M.; Liò, P. Variational Autoencoders for Cancer Data Integration: Design Principles and Computational Practice. Front. Genet. 2019, 10, 1205.
  14. Miotto, R.; Wang, F.; Wang, S.; Jiang, X.; Dudley, J.T. Deep learning for healthcare: Review, opportunities and challenges. Brief. Bioinform. 2018, 19, 1236–1246.
  15. Jeyaraj, P.R.; Nadar, E.R.S. Deep Boltzmann machine algorithm for accurate medical image analysis for classification of cancerous region. Cogn. Comput. Syst. 2019, 1, 85–90.
  16. Abdulrahman, S.A.; Salem, A.M. An efficient deep belief network for Detection of Corona Virus Disease COVID-19. ASPG 2020, 2, 5–13.
  17. Wang, S.; Zha, Y.; Li, W.; Wu, Q.; Li, X.; Niu, M.; Wang, M.; Qiu, X.; Li, H.; Yu, H.; et al. A fully automatic deep learning system for COVID-19 diagnostic and prognostic analysis. Eur. Respir. J. 2020, 56, 2000775.
  18. Yue, L.; Gong, X.; Li, J.; Ji, H.; Li, M.; Nandi, A.K. Hierarchical Feature Extraction for Early Alzheimer’s Disease Diagnosis. IEEE Access 2019, 7, 93752–93760.
  19. Barreira, C.M.; Bouslama, M.; Haussen, D.C.; Grossberg, J.A.; Baxter, B.; Devlin, T.; Frankel, M.; Nogueira, R.G. Abstract WP61: Automated Large Artery Occlusion Detection IN Stroke Imaging—ALADIN Study. Stroke 2018, 49 (Suppl. S1), AWP61.
  20. Cireşan, D.C.; Giusti, A.; Gambardella, L.M.; Schmidhuber, J. Mitosis Detection in Breast Cancer Histology Images with Deep Neural Networks. In Medical Image Computing and Computer-Assisted Intervention—MICCAI 2013; Mori, K., Sakuma, I., Sato, Y., Barillot, C., Navab, N., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; Volume 8150, pp. 411–418.
  21. Liu, Y.; Gadepalli, K.; Norouzi, M.; Dahl, G.E.; Kohlberger, T.; Boyko, A.; Venugopalan, S.; Timofeev, A.; Nelson, P.Q.; Corrado, G.S.; et al. Detecting Cancer Metastases on Gigapixel Pathology Images. arXiv 2017, arXiv:1703.02442. Available online: http://arxiv.org/abs/1703.02442 (accessed on 29 August 2021).
  22. Beck, A.H.; Sangoi, A.R.; Leung, S.; Marinelli, R.J.; Nielsen, T.O.; Van De Vijver, M.J.; West, R.B.; van de Rijn, M.; Koller, D. Systematic Analysis of Breast Cancer Morphology Uncovers Stromal Features Associated with Survival. Sci. Transl. Med. 2011, 3, 108ra113.
  23. Towards the Swift Prediction of the Remaining Useful Life of Lithium-Ion Batteries with End-to-End Deep Learning—ScienceDirect. Available online: https://www.sciencedirect.com/science/article/pii/S0306261920311429?via%3Dihub (accessed on 14 January 2022).
  24. Chen, Z.; Chen, L.; Shen, W.; Xu, K. Remaining Useful Life Prediction of Lithium-Ion Battery Via a Sequence Decomposition and Deep Learning Integrated Approach. IEEE Trans. Veh. Technol. 2021, 414, 245–254.
  25. Islam, M.M.; Karray, F.; Alhajj, R.; Zeng, J. A Review on Deep Learning Techniques for the Diagnosis of Novel Coronavirus (COVID-19). IEEE Access 2021, 9, 30551–30572.
  26. Ortiz-Echeverri, C.J.; Salazar-Colores, S.; Rodríguez-Reséndiz, J.; Gómez-Loenzo, R.A. A New Approach for Motor Imagery Classification Based on Sorted Blind Source Separation, Continuous Wavelet Transform, and Convolutional Neural Network. Sensors 2019, 19, 4541.
  27. Gutierrez-Villalobos, J.M.; Rodriguez-Resendiz, J.; Rivas-Araiza, E.A.; Martínez-Hernández, M.A. Sensorless FOC Performance Improved with On-Line Speed and Rotor Resistance Estimator Based on an Artificial Neural Network for an Induction Motor Drive. Sensors 2015, 15, 15311.
  28. Cruz-Miguel, E.E.; García-Martínez, J.R.; Rodríguez-Reséndiz, J.; Carrillo-Serrano, R.V. A New Methodology for a Retrofitted Self-tuned Controller with Open-Source FPGA. Sensors 2020, 20, 6155.
  29. Rodríguez-Abreo, O.; Rodríguez-Reséndiz, J.; Fuentes-Silva, C.; Hernández-Alvarado, R.; Falcón MD CP, T. Self-Tuning Neural Network PID With Dynamic Response Control. IEEE Access 2021, 9, 65206–65215.
  30. Benyelles, F.Z.; Sekkal, A.; Settouti, N. Content Based COVID-19 Chest X-ray and CT Images Retrieval framework using Stacked Auto-Encoders. In Proceedings of the 2nd International Workshop on Human-Centric Smart Environments for Health and Well-being (IHSH), Boumerdes, Algeria, 9–10 February 2020.
  31. Ravì, D.; Wong, C.; Deligianni, F.; Berthelot, M.; Andreu-Perez, J.; Lo, B.; Yang, G.Z. Deep learning for health informatics. IEEE J. Biomed. Health Inform. 2017, 21, 4–21.
  32. Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. ImageNet Large Scale Visual Recognition Challenge. Int. J. Comput. Vis. 2015, 115, 211–252.
  33. Yosinski, J.; Clune, J.; Bengio, Y.; Lipson, H. How transferable are features in deep neural networks? arXiv 2014, arXiv:1411.1792. Available online: http://arxiv.org/abs/1411.1792 (accessed on 29 August 2021).
  34. Hochreiter, S.; Schmidhuber, J. Long Short-Term Memory. Neural Comput. 1997, 9, 1735–1780.
  35. Chou, K.; Ramsundar, B.; Robicquet, A. A Guide to Deep Learning in Healthcare. Available online: researchgate.net (accessed on 10 September 2021).
  36. Al-Waisy, A.S.; Mohammed, M.A.; Al-Fahdawi, S.; Maashi, M.S.; Garcia-Zapirain, B.; Abdulkareem, K.H.; Mostafa, S.A.; Le, D.N. COVID-DeepNet: Hybrid Multimodal Deep Learning System for Improving COVID-19 Pneumonia Detection in Chest X-ray Images. Comput. Mater. Contin. 2021, 67, 2409–2429.
  37. Reyes, L.M.S.; Rodriguez, J.; Avecilla-Ramírez, G.N.; García, L. Impact of EEG Parameters Detecting Dementia Diseases: A Systematic Review. IEEE Access 2021, 9, 78060–78074.
  38. Atienza, R. Advanced Deep Learning with Keras; Packt Publishing: Birmingham, UK, 2018.
  39. Pumsirirat, A.; Yan, L. Credit Card Fraud Detection using Deep Learning based on Auto-Encoder and Restricted Boltzmann Machine. Int. J. Adv. Comput. Sci. Appl. 2018, 9, 18–25.
  40. Zrira, N.; Khan, H.A.; Bouyakhf, E.H. Discriminative Deep Belief Network for Indoor Environment Classification Using Global Visual Features. Cogn. Comput. 2018, 10, 437–453.
  41. Keras for Beginners: Implementing a Convolutional Neural Network—victorzhou.com. Available online: https://victorzhou.com/blog/keras-cnn-tutorial/ (accessed on 5 September 2021).
  42. Keras for Beginners: Implementing a Recurrent Neural Network—victorzhou.com. Available online: https://victorzhou.com/blog/keras-rnn-tutorial/ (accessed on 5 September 2021).
  43. Wadhwa, P.; Aishwarya; Tripathi, A.; Singh, P.; Diwakar, M.; Kumar, N. Predicting the time period of extension of lockdown due to increase in rate of COVID-19 cases in India using machine learning. Mater. Today Proc. 2021, 37, 2617–2622.
  44. An Introduction to Backpropagation Algorithm and How It Works? Available online: https://www.mygreatlearning.com/blog/backpropagation-algorithm/ (accessed on 31 August 2021).
  45. Autoencoders Tutorial|What Are Autoencoders? Edureka. 12 October 2018. Available online: https://www.edureka.co/blog/autoencoders-tutorial/ (accessed on 31 August 2021).
  46. Understanding Variational Autoencoders (VAEs)|by Joseph Rocca|Towards Data Science. Available online: https://towardsdatascience.com/understanding-variational-autoencoders-vaes-f70510919f73 (accessed on 31 August 2021).
  47. Variational Autoencoders, Jeremy Jordan. 19 March 2018. Available online: https://www.jeremyjordan.me/variational-autoencoders/ (accessed on 31 August 2021).
  48. A Comprehensive Guide to Convolutional Neural Networks—The ELI5 Way|by Sumit Saha|Towards Data Science. Available online: https://towardsdatascience.com/a-comprehensive-guide-to-convolutional-neural-networks-the-eli5-way-3bd2b1164a53 (accessed on 31 August 2021).
  49. Recurrent Neural Networks for Prediction: Learning Algorithms, Architectures and Stability|Wiley, Wiley.com. Available online: https://www.wiley.com/en-gb/Recurrent+Neural+Networks+for+Prediction%3A+Learning+Algorithms%2C+Architectures+and+Stability-p-9780471495178 (accessed on 16 December 2021).
  50. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. In Proceedings of the 27th International Conference on Neural Information Processing Systems—Volume 2, Cambridge, MA, USA, 8–13 December 2014; pp. 2672–2680.
  51. Sabour, S.; Frosst, N.; Hinton, G.E. Dynamic Routing Between Capsules. Adv. Neural Inf. Process. Syst. 2017, 30, 11.
  52. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is All you Need. In Advances in Neural Information Processing Systems, Proceedings of the 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA, 4–9 December 2017; MIT: Cambridge, MA, USA; Volume 30, Available online: https://proceedings.neurips.cc/paper/2017/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html (accessed on 16 December 2021).
  53. Devlin, J.; Chang, M.-W.; Lee, K.; Toutanova, K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, MN, USA, 2–7 June 2019; pp. 4171–4186.
  54. Galassi, A.; Lippi, M.; Torroni, P. Attention in Natural Language Processing. IEEE Trans. Neural Netw. Learn. Syst. 2021, 32, 4291–4308.
  55. Garg, S. Demystifying ‘Matrix Capsules with EM Routing’. Medium. 23 November 2018. Available online: https://towardsdatascience.com/demystifying-matrix-capsules-with-em-routing-part-1-overview-2126133a8457 (accessed on 22 December 2021).
  56. Sharma, A. adityashrm21 Demystifying Restricted Boltzmann Machines, Aditya Sharma. 2 October 2018. Available online: https://adityashrm21.github.io/https://adityashrm21.github.io/Restricted-Boltzmann-Machines/ (accessed on 31 August 2021).
  57. Pelli, D.G. Crowding: A cortical constraint on object recognition. Curr. Opin. Neurobiol. 2008, 18, 445–451.
  58. Afshar, P.; Mohammadi, A.; Plataniotis, K.N. Brain Tumor Type Classification via Capsule Networks. arXiv 2018, arXiv:1802.10200. Available online: http://arxiv.org/abs/1802.10200 (accessed on 16 December 2021).
  59. Wang, Y.; Sun, A.; Han, J.; Liu, Y.; Zhu, X. Sentiment Analysis by Capsules. In Proceedings of the 2018 World Wide Web Conference, Geneva, Switzerland, 23–27 April 2018; pp. 1165–1174.
  60. LaLonde, R.; Bagci, U. Capsules for Object Segmentation. arXiv 2018, arXiv:1804.04241. Available online: http://arxiv.org/abs/1804.04241 (accessed on 16 December 2021).
  61. Miraoui, I. A No-Frills Guide to Most Natural Language Processing Models—The LSTM Age—Seq2Seq, InferSent…, Medium. 20 February 2020. Available online: https://towardsdatascience.com/a-no-frills-guide-to-most-natural-language-processing-models-the-lstm-age-seq2seq-infersent-3af80e77687 (accessed on 16 December 2021).
  62. Akbik, A.; Blythe, D.; Vollgraf, R. Contextual String Embeddings for Sequence Labeling. In Proceedings of the 27th International Conference on Computational Linguistics, Santa Fe, NM, USA, 20–26 August 2018; pp. 1638–1649. Available online: https://aclanthology.org/C18-1139 (accessed on 16 December 2021).
  63. MarketMuse, Google BERT Update and What You Should Know, MarketMuse Blog. Available online: https://blog.marketmuse.com/google-bert-update/ (accessed on 16 December 2021).
  64. Transformers In NLP|State-of-the-Art-Models, Analytics Vidhya. 19 June 2019. Available online: https://www.analyticsvidhya.com/blog/2019/06/understanding-transformers-nlp-state-of-the-art-models/ (accessed on 16 December 2021).
  65. AlQahtani, H.; Kavakli-Thorne, M.; Kumar, G. Applications of Generative Adversarial Networks (GANs): An Updated Review. Arch. Comput. Methods Eng. 2021, 28, 525–552.
  66. Deep Learning Next Step: Transformers and Attention Mechanism, KDnuggets. Available online: https://www.kdnuggets.com/deep-learning-next-step-transformers-and-attention-mechanism.html/ (accessed on 16 December 2021).
  67. Alammar, J. The Illustrated Transformer. Available online: https://jalammar.github.io/illustrated-transformer/ (accessed on 16 December 2021).
  68. COVID-19 Map, Johns Hopkins Coronavirus Resource Center. Available online: https://coronavirus.jhu.edu/map.html (accessed on 31 August 2021).
  69. Rehman, A.; Iqbal, M.A.; Xing, H.; Ahmed, I. COVID-19 Detection Empowered with Machine Learning and Deep Learning Techniques: A Systematic Review. Appl. Sci. 2021, 11, 3414.
  70. Shorten, C.; Khoshgoftaar, T.M.; Furht, B. Deep Learning applications for COVID-19. J. Big Data 2021, 8, 18.
  71. Kumari, I.S.; Ranjith, E.; Gujjar, A.; Narasimman, S.; Aadil Sha Zeelani, H.S. Comparative analysis of deep learning models for COVID-19 detection. Glob. Transit. Proc. 2021, 2, 559–565.
  72. Nabi, K.N.; Tahmid, M.T.; Rafi, A.; Kader, M.E.; Haider, M.A. Forecasting COVID-19 cases: A comparative analysis between recurrent and convolutional neural networks. Results Phys. 2021, 24, 104137.
  73. Davahli, M.R.; Karwowski, W.; Fiok, K. Optimizing COVID-19 vaccine distribution across the United States using deterministic and stochastic recurrent neural networks. PLoS ONE 2021, 16, e0253925.
  74. Omran, N.F.; Ghany, S.F.A.; Saleh, H.; Ali, A.A.; Gumaei, A.; Al-Rakhami, M. Applying Deep Learning Methods on Time-Series Data for Forecasting COVID-19 in Egypt, Kuwait, and Saudi Arabia. Complexity 2021, 2021, e6686745.
  75. Saood, A.; Hatem, I. COVID-19 lung CT image segmentation using deep learning methods: U-Net versus SegNet. BMC Med. Imaging 2021, 21, 19.
  76. AlZubaidi, M.; Zubaydi, H.D.; Bin-Salem, A.A.; Abd-Alrazaq, A.A.; Ahmed, A.; Househ, M. Role of Deep Learning in Early Detection of COVID-19: Scoping Review; Elsevier: Amsterdam, The Netherlands, 2021.
  77. Bhattacharya, E.; Bhattacharya, D. A Review of Recent Deep Learning Models in COVID-19 Diagnosis. Eur. J. Eng. Technol. Res. 2021, 6, 10–15.
  78. The Impact of Artificial Intelligence, Blockchain, Big Data and Evolving Technologies in Coronavirus Disease—2019 (COVID-19) Curtailment|IEEE Conference Publication|IEEE Xplore. Available online: https://ieeexplore.ieee.org/document/9215294 (accessed on 8 October 2021).
  79. Shi, F.; Wang, J.; Shi, J.; Wu, Z.; Wang, Q.; Tang, Z.; He, K.; Shi, Y.; Shen, D. Review of Artificial Intelligence Techniques in Imaging Data Acquisition, Segmentation, and Diagnosis for COVID-19. IEEE Rev. Biomed. Eng. 2021, 14, 4–15.
  80. Prediction of COVID-19 Using Genetic Deep Learning Convolutional Neural Network (GDCNN)|IEEE Journals & Magazine|IEEE Xplore. Available online: https://ieeexplore.ieee.org/document/9201297 (accessed on 12 October 2021).
  81. Sethi, R.; Mehrotra, M.; Sethi, D. Deep Learning based Diagnosis Recommendation for COVID-19 using Chest X-Rays Images. In Proceedings of the 2020 Second International Conference on Inventive Research in Computing Applications (ICIRCA), Coimbatore, India, 15–17 July 2020; pp. 1–4.
  82. Qjidaa, M.; Ben-Fares, A.; Mechbal, Y.; Amakdouf, H.; Maaroufi, M.; Alami, B.; Qjidaa, H. Development of a clinical decision support system for the early detection of COVID-19 using deep learning based on chest radiographic images. In Proceedings of the 2020 International Conference on Intelligent Systems and Computer Vision (ISCV), Fez, Morocco, 9–11 June 2020; pp. 1–6.
  83. Santosh, K.C. AI-Driven Tools for Coronavirus Outbreak: Need of Active Learning and Cross-Population Train/Test Models on Multitudinal/Multimodal Data. J. Med. Syst. 2020, 44, 93.
  84. Jin, C.; Chen, W.; Cao, Y.; Xu, Z.; Tan, Z.; Zhang, X.; Deng, L.; Zheng, C.; Zhou, J.; Shi, H.; et al. Development and evaluation of an artificial intelligence system for COVID-19 diagnosis. Nat. Commun. 2020, 11, 5088.
  85. Coronavirus Disease (COVID-19). Available online: https://www.scienceopen.com/book?vid=68f1ca37-7ab9-4c32-9c9d-cf8013f13b38 (accessed on 12 October 2021).
  86. Kavitha, M.; Jayasankar, T.; Venkatesh, P.M.; Mani, G.; Bharatiraja, C.; Twala, B. COVID-19 Disease Diagnosis using Smart Deep Learning Techniques. J. Appl. Sci. Eng. 2021, 24, 271–277.
  87. Abbas, A.; Abdelsamea, M.M.; Gaber, M.M. Classification of COVID-19 in chest X-ray images using DeTraC deep convolutional neural network. Appl. Intell. 2021, 51, 854–864.
  88. Uddin, S.; Khan, A.; Hossain, M.E.; Moni, M.A. Comparing different supervised machine learning algorithms for disease prediction. BMC Med. Inform. Decis. Mak. 2019, 19, 281.
  89. Alimadadi, A.; Aryal, S.; Manandhar, I.; Munroe, P.B.; Joe, B.; Cheng, X. Artificial intelligence and machine learning to fight COVID-19. Physiol. Genom. 2020, 52, 200–202.
  90. Wang, S.; Kang, B.; Ma, J.; Zeng, X.; Xiao, M.; Guo, J.; Cai, M.; Yang, J.; Li, Y.; Meng, X.; et al. A deep learning algorithm using CT images to screen for Corona virus disease (COVID-19). Eur. Radiol. 2021, 31, 6096–6104.
  91. Imran, A.; Posokhova, I.; Qureshi, H.N.; Masood, U.; Riaz, M.S.; Ali, K.; John, C.N.; Hussain, I.; Nabeel, M. AI4COVID-19: AI enabled preliminary diagnosis for COVID-19 from cough samples via an app. Inform. Med. Unlocked 2020, 20, 100378.
  92. Xu, X.; Jiang, X.; Ma, C.; Du, P.; Li, X.; Lv, S.; Yu, L.; Ni, Q.; Chen, Y.; Su, J.; et al. A Deep Learning System to Screen Novel Coronavirus Disease 2019 Pneumonia. Eng. Beijing China 2020, 6, 1122–1129.
  93. Toğaçar, M.; Ergen, B.; Cömert, Z. COVID-19 detection using deep learning models to exploit Social Mimic Optimization and structured chest X-ray images using fuzzy color and stacking approaches. Comput. Biol. Med. 2020, 121, 103805.
  94. Ardakani, A.A.; Kanafi, A.R.; Acharya, U.R.; Khadem, N.; Mohammadi, A. Application of deep learning technique to manage COVID-19 in routine clinical practice using CT images: Results of 10 convolutional neural networks. Comput. Biol. Med. 2020, 121, 103795.
  95. Le, D.-N.; Parvathy, V.S.; Gupta, D.; Khanna, A.; Rodrigues, J.J.P.C.; Shankar, K. IoT enabled depthwise separable convolution neural network with deep support vector machine for COVID-19 diagnosis and classification. Int. J. Mach. Learn. Cybern. 2021, 12, 3235–3248.
  96. Gil-Martín, M.; Montero, J.M.; San-Segundo, R. Parkinson’s Disease Detection from Drawing Movements Using Convolutional Neural Networks. Electronics 2019, 8, 907.
  97. Khatamino, P.; Canturk, I.; Ozyilmaz, L. A Deep Learning-CNN Based System for Medical Diagnosis: An Application on Parkinson’s Disease Handwriting Drawings. In Proceedings of the 2018 6th International Conference on Control Engineering & Information Technology (CEIT), Istanbul, Turkey, 25–27 October 2018; p. 6.
  98. Gallicchio, C.; Micheli, A.; Pedrelli, L. Deep Echo State Networks for Diagnosis of Parkinson’s Disease. arXiv 2018, arXiv:1802.06708. Available online: http://arxiv.org/abs/1802.06708 (accessed on 17 October 2021).
  99. Hosny, K.; Kassem, M.; Fouad, M. Skin Cancer Classification Using Deep Learning and Transfer Learning; IEEE: Cairo, Egypt, 2018.
  100. Codella, N.; Cai, J.; Abedini, M.; Garnavi, R.; Halpern, A.; Smith, J.R. Deep Learning, Sparse Coding, and SVM for Melanoma Recognition in Dermoscopy Images. In Machine Learning in Medical Imaging; MLMI 2015. Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2015; Volume 9352.
  101. Mishra, N.K.; Celebi, M.E. An Overview of Melanoma Detection in Dermoscopy Images Using Image Processing and Machine Learning. arXiv 2016, arXiv:1601.07843. Available online: http://arxiv.org/abs/1601.07843 (accessed on 17 October 2021).