Adversarial Attacks on Medical Imaging: History

One of the most important challenges in the computer vision (CV) area is medical image analysis, in which deep learning (DL) models process medical images—such as magnetic resonance imaging (MRI), X-ray, and computed tomography (CT) scans—using convolutional neural networks (CNNs) to diagnose or detect several diseases. The proper functioning of these models can significantly improve healthcare systems. However, recent studies have shown that CNN models are vulnerable to adversarial attacks with imperceptible perturbations.

  • deep learning
  • adversarial attack
  • medical image analysis
  • computer vision
  • convolutional neural networks

1. Introduction

Adversarial attacks have raised concerns in the research community about the safety of deep neural networks and about whether we can trust them with our lives when they can be fooled so easily. Adversarial examples can be created either with knowledge of the DL model's parameters (white-box attacks) or without it (black-box attacks) [1]. Usually, the noise that attackers add to a clean image is not random but is computed by optimizing the input to maximize the prediction error. However, random noise is also used when the model's parameters are unknown. Furthermore, there is a phenomenon called "adversarial transferability", which means that adversarial examples created on one model can be effective against another model [2]. In addition, a study by Kurakin et al. [3] showed that adversarial examples are able to fool a model in the real world even when the adversarial example is printed out, as shown in Figure 1.

Figure 1. Adversarial attacks on printed-out images [3].
There are two categories of defenses for decreasing the success rate of adversarial attacks: data-level defenses and algorithmic-level defenses. The first category includes adversarial training [4][5] and preprocessing and postprocessing methods such as feature squeezing [6] and the MagNet method [7]. In adversarial training, the model is trained with adversarial examples that are correctly labeled. In the second category, methods modify the model's architecture, classifier, or capacity [8]. However, these techniques are not always effective, as most of them work only against specific kinds of attacks, either white-box or black-box. Moreover, many of them sacrifice accuracy on clean images.
At the same time, many doctors and researchers in the field of medicine are reluctant to trust these models because they are treated as "black boxes": we cannot explain how they make a decision. This matters because a wrong decision in medicine carries a very high cost, as human lives are at stake. The efficacy of adversarial attacks reinforces the doctors' skepticism by suggesting that these models cannot cope with real-world problems. Although adversarial examples may seem unrealistic in medical image analysis, there are serious motivations that should be taken into consideration. For example, attackers can perturb test reports in order to receive medical compensation [9].

2. Medical Image Analysis

Medical image analysis aims at processing images of the human body, acquired through different imaging modalities, for medical purposes such as diagnosis, treatment, and health monitoring. The evolution of deep neural networks in the field of computer vision has solved problems on which classical image processing techniques performed poorly. These solutions have been widely applied in medical imaging because such networks have proven to be the best choice for dealing with complex and high-dimensional data such as medical images. The use of computer vision in medicine is quite significant, as it offers high rates of successful early diagnosis, which is crucial for reducing mortality. Deep learning addresses several tasks in medical image analysis, the most important being classification (diagnosis), detection, and segmentation.

2.1. Classification—Diagnosis

A major application of deep learning in medical image analysis is classification, or computer-aided diagnosis (CAD), in which images are the inputs and DL models classify them into several classes. Usually, models predict whether or not a patient has a disease.

2.2. Detection

Detection is another important task in medical image analysis. Accurate and fast localization of anatomical or pathological objects, such as organs and landmarks, is quite significant for image registration and segmentation tasks [10][11].

2.3. Segmentation

Segmentation in medical imaging refers to extracting specific parts of a medical image, such as cells, tumors, and organs, so that they can be analyzed in detail [10]. In addition, segmenting these structures allows clinical parameters such as volume and shape to be analyzed [12].

3. General Adversarial Examples

An adversarial example is an input sample to which imperceptible noise has been added so that it is misclassified. A characteristic example is presented in Figure 2, where an attack applied to a deep learning model [5] leads to a wrong classification with high confidence. Szegedy et al. [4] were the first authors to investigate adversarial examples, and they concluded that the success of this attack is due to the lack of generalization in the low-probability space of the data. However, some later works [5][8] have shown that even linear models are vulnerable and that increasing a model's capacity improves its robustness to these attacks. According to Xu et al. [13], it is important to study why adversarial examples exist and to better understand deep learning models in order to create more robust ones. Attacks can be divided into three categories depending on the adversary's knowledge. The first category is the white-box attack, in which adversaries know everything about the model, such as its architecture and parameters. The second category is the grey-box attack, in which adversaries know the structure of the model but not its parameters. Lastly, in the third category, the black-box attack, adversaries know nothing about the model's structure or parameters. In addition, there are targeted and untargeted attacks. In the former, attackers want the input sample to be misclassified into a specific class, while in the latter they just want it to be misclassified. There are numerous adversarial attacks and defenses [14], but none of these defenses is a panacea for all types of attacks.

Figure 2. Prediction before and after attack [5].

3.1. Adversarial Attacks

FGSM (fast gradient sign method) [5] was one of the first proposed adversarial attacks. FGSM is a white-box attack and produces adversarial examples for computer vision systems. This method computes the gradient of the loss with respect to the input and increases or decreases the pixel values so that the loss increases. It perturbs a clean sample with a single-step update along the sign of the gradient. This attack is formulated as

x_adv = x + ϵ · sign(∇x J(θ, x, y)),

where x is the input image, y is the label and θ represents the weights of the model. Moreover, ϵ is the magnitude of the perturbation, J(θ, x, y) is the loss function, sign(·) is the sign function and ∇x(·) is the gradient w.r.t. x.
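For illustration, a minimal PyTorch sketch of this one-step update is shown below. It assumes a classification model, inputs scaled to [0, 1], and a cross-entropy loss; the function name and arguments are illustrative and not taken from any particular library.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon):
        """One-step FGSM: move each pixel by epsilon along the sign of the
        gradient of the loss w.r.t. the input, so that the loss increases."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + epsilon * grad.sign()
        # Keep the perturbed image inside the valid pixel range.
        return x_adv.clamp(0, 1)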
BIM (basic iterative method), or I-FGSM [3], is an iterative refinement of FGSM. It applies an FGSM-style step of size α and repeats the update for T iterations until the image is misclassified. This method is formulated as

x_0 = x,   x_{t+1} = Clip_{x,ϵ}( x_t + α · sign(∇x J(θ, x_t, y)) ),   t = 0, …, T − 1,

where αT = ϵ and α is the magnitude of the perturbation at each iteration.
PGD (projected gradient descent) [8] is a generalization of BIM without the constraint αT = ϵ. Perturbations are constrained by projecting the adversarial sample after each iteration back into the ϵ-L∞ or ϵ-L2 neighborhood of the clean image.
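A minimal PyTorch sketch of such a projected iterative attack is shown below, assuming an L∞ constraint, inputs in [0, 1], and a cross-entropy loss; the names, the step size, and the omission of a random start are simplifications rather than a reference implementation.

    import torch
    import torch.nn.functional as F

    def pgd_attack(model, x, y, epsilon, alpha, steps):
        """Iterative FGSM with a projection back onto the L-infinity
        epsilon-ball around the clean image after every step."""
        x_adv = x.clone().detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            x_adv = x_adv.detach() + alpha * grad.sign()
            # Projection: clip the perturbation to [-epsilon, +epsilon] ...
            x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon)
            # ... and keep pixels in the valid range.
            x_adv = x_adv.clamp(0, 1)
        return x_adv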
C&W (Carlini & Wagner) [15] is another state-of-the-art attack that consists of three methods, C&W∞, C&W2, and C&W0, which minimize the L∞, L2, and L0 norms, respectively, in order to compute the perturbation.
JSMA (Jacobian-based saliency map attack) [16] is an iterative method that affects the values of only a few pixels: it changes the value of one pixel in each iteration while leaving the rest unchanged, and in this way a saliency map is computed. The region with the most effective perturbation is then selected, and this region is perturbed in the clean image.
UAP (universal adversarial perturbation) [17] is an attack that creates a single perturbation for all the images in a dataset, trying to find the optimal perturbation that misclassifies most of the data points.
DAG (dense adversary generation) [18] is a black-box method that creates adversarial samples for object detection and semantic segmentation tasks.

3.2. Adversarial Defenses

In this section, some of the state-of-the-art defenses that are used to mitigate the phenomenon of adversarial attacks are discussed.
Adversarial training is one of the most widely used defenses, in which a model is trained with adversarial samples so that it becomes more robust. This method is a min-max game that is formulated as

min_θ Σ_i max_{‖xi − x0i‖ ≤ ϵ} ℓ(hθ(xi), yi),

where hθ is the DNN function, xi is the adversarial example of the clean sample x0i, ℓ(hθ(xi), yi) is the loss function on the adversarial example (xi, yi), and ϵ is the maximum perturbation constraint.
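A simplified sketch of one epoch of this min-max procedure is shown below; it reuses the pgd_attack sketch from Section 3.1 for the inner maximization and assumes that the model, data loader, optimizer, and hyperparameters are defined elsewhere.

    import torch.nn.functional as F

    def adversarial_training_epoch(model, loader, optimizer, epsilon, alpha, steps):
        """One epoch of adversarial training: the inner maximization crafts
        adversarial examples, the outer minimization updates the weights on them."""
        model.train()
        for x, y in loader:
            # Inner maximization: find a perturbation within the epsilon-ball
            # that maximizes the loss for the current weights.
            x_adv = pgd_attack(model, x, y, epsilon, alpha, steps)
            # Outer minimization: standard gradient step on the adversarial batch.
            optimizer.zero_grad()
            loss = F.cross_entropy(model(x_adv), y)
            loss.backward()
            optimizer.step()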
Ensemble adversarial training is another effective method, developed with black-box attacks in mind. Adversarial training is effective, but the individual models used are vulnerable to black-box attacks, as they can defend only against the attacks on which they were trained. Tramèr et al. [19] introduced ensemble adversarial training to mitigate this phenomenon. They trained neural networks with adversarial samples produced by several methods, such as FGSM and PGD, so that the model is exposed to a diverse set of training samples.
There are numerous other defense methods [14], such as randomization, which randomizes the adversarial samples [20], and denoising, which tries to remove the perturbations from the input [6]. Some others are weight-sparse DNNs [21], KNN-based defenses [22], Bayesian model-based defenses [23], and consistency-based defenses [24]. There are also detection methods, which detect an adversarial sample and reject it before it enters the model as input [25][26].
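As a concrete, simplified illustration of an input-level detector in the spirit of feature squeezing [6], the sketch below compares a model's prediction on the original input with its prediction on a "squeezed" copy and flags inputs whose predictions diverge; the bit depth and threshold are illustrative values that would normally be tuned on validation data, and the original method also uses spatial smoothing squeezers.

    import torch
    import torch.nn.functional as F

    def reduce_bit_depth(x, bits=4):
        """Squeeze the input: quantize pixels in [0, 1] to 2**bits levels."""
        levels = 2 ** bits - 1
        return torch.round(x * levels) / levels

    def flag_adversarial(model, x, threshold=1.0):
        """Flag samples whose softmax output changes too much after squeezing."""
        with torch.no_grad():
            p_clean = F.softmax(model(x), dim=1)
            p_squeezed = F.softmax(model(reduce_bit_depth(x)), dim=1)
        l1_distance = (p_clean - p_squeezed).abs().sum(dim=1)
        return l1_distance > threshold  # boolean mask over the batch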

4. Adversarial Medical Image Analysis

4.1. Existing Adversarial Attacks on Medical Images

Paschali et al. [9] studied the effects of adversarial attacks on brain segmentation and skin lesion classification. For the classification task, the InceptionV3, InceptionV4 [27], and MobileNet [28] models were used, while SegNet [29], U-Net, and DenseNet [30] were used for the segmentation task. Experiments showed that InceptionV3 and DenseNet were the most robust models for the classification and segmentation tasks, respectively. The authors demonstrated that a model's robustness is correlated with its depth for classification, while for segmentation, dense blocks and skip connections increase its robustness. The adversarial samples were imperceptible, as the SSIM was 0.97–0.99. Wetstein et al. [31] studied the factors that affect the efficacy of adversarial attacks. Their results show that the magnitude of the perturbation is correlated with both the efficacy and the perceptibility of the attacks. In addition, pre-trained models enhance adversarial transferability, and the performance of an attack can be reduced when there is a mismatch between the target's and the attacker's data or model. Finlayson et al. [32] applied PGD white-box and black-box attacks to fundoscopy, dermoscopy, and chest X-ray images using a pre-trained ResNet50 model. The accuracy of the model decreased dramatically in both cases.
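To make such reported degradations concrete, the sketch below shows one way the accuracy drop of a classifier under a PGD attack could be measured, reusing the pgd_attack sketch from Section 3.1; the model, test loader, and attack hyperparameters are placeholders rather than the setup of any cited study.

    import torch

    def accuracy(model, loader, attack=None, **attack_kwargs):
        """Top-1 accuracy on clean data, or on attacked data if an attack is given."""
        model.eval()
        correct, total = 0, 0
        for x, y in loader:
            x_eval = attack(model, x, y, **attack_kwargs) if attack else x
            with torch.no_grad():
                correct += (model(x_eval).argmax(dim=1) == y).sum().item()
            total += y.numel()
        return correct / total

    # Hypothetical usage with a hypothetical test_loader:
    # clean_acc = accuracy(model, test_loader)
    # adv_acc = accuracy(model, test_loader, attack=pgd_attack,
    #                    epsilon=0.03, alpha=0.01, steps=10)
    # degradation = clean_acc - adv_acc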

Table 1 summarizes the existing attacks implemented on medical images. The performance degradation column shows that some attacks can dramatically reduce a model's accuracy. These attacks were tested only on classification and segmentation tasks. FGSM and PGD were the most used methods, and PGD seems to be the most effective. Moreover, most of the experiments were carried out on MRI, dermoscopy, and X-ray images. It is worth noting that the entry "Not provided" in Table 1 indicates that the authors did not report results in the form of a percentage error.
Table 1. Overview of existing adversarial attacks on medical images.
Reference | Attacks | Models | Modality | Task | Performance Degradation (%)
[9] | FGSM, DF, JSMA | Inception, MobileNet, SegNet, U-Net, DenseNet | Dermoscopy, MRI | Classification, Segmentation | 6–24% / 19–40%
[32] | PGD | ResNet50 | Fundoscopy, Dermoscopy, X-ray | Classification | 50–100%
[33] | UAP | U-Net | MRI | Segmentation | Up to 65%
[34] | UAP | DNN, Hybrid DNN | MRI | Classification | Not provided
[35] | FGSM, PGD | VGG16, MobileNet | Dermoscopy | Classification | Up to 75%
[36] | FGSM, PGD | VGG11, U-Net | X-ray, MRI | Classification, Segmentation | Up to 100%
[37] | FGSM, One-pixel attack | CNN | CT scans | Classification | 28–36% / 2–3%
[38] | FGSM, VAT, Noise-based attack | CNN | MRI | Classification | 69% / 34% / 24%
[39] | I-FGSM | CNN, Hybrid lesion-based model | Fundoscopy | Classification | 45% / 0.6%
[40] | PGD | InceptionV3 | X-ray, Histology | Classification | Up to 100%
[41] | FGSM, I-FGSM | ResDSN Coarse | CT scans | Segmentation | 86%
[42] | Image-dependent perturbation | DenseNet201 | Dermoscopy | Classification | 17%
[43] | UAP | VGGNets, InceptionResNetV2, ResNet50, DenseNets | Dermoscopy, Fundoscopy, X-ray | Classification | Up to 72%
[44] | UAP | COVIDNet | X-ray | Classification | Up to 45%
[45] | FGSM, PGD, C&W, BIM | ResNet50 | X-ray, Dermoscopy, Fundoscopy | Classification | Up to 100%
[46] | FGSM | VGG-16, InceptionV3 | CT scans, X-ray | Classification | Up to 90%
[47] | PGD | Similar to U-Net | X-ray | Segmentation | Up to 100%
[48] | FGSM | Custom CNN | Mammography | Classification | Up to 30%

4.2. Adversarial Attacks for Medical Images

Byra et al. [49] proposed an attack method on ultrasound (US) images for fatty liver. US images are reconstructed from radio-frequency signals, and the authors applied a zeroth-order optimization attack [50] to the reconstruction method. The experiments were performed with the InceptionResNetV2 model, and the attack achieved a 48% reduction in the model's accuracy. Ozbulak et al. [51] proposed a targeted attack for medical image segmentation named the adaptive segmentation mask attack (ASMA). The proposed attack produces samples that are imperceptible for the most part and achieves high intersection-over-union (IoU) degradation. For the experiments, they used the U-Net model because it is one of the best-known models for medical image segmentation. The glaucoma optic disc segmentation [52] and ISIC skin lesion segmentation [53] datasets were used.
A very interesting study was conducted by Kugler et al. [54], who investigated physical attacks on skin images. They used the HAM10000 dataset for training and the PADv1 dataset for attacking. The perturbations in this case were dots and lines drawn with a pen or acrylic (Figure 3). The models they trained were ResNet, InceptionV3, InceptionResNetV2, Xception, and MobileNet. In contrast to digital attacks, physical attacks produce only a small difference in confidence compared to clean images. The most robust networks were Xception and InceptionResNet. Finally, the authors claimed that the attacks' consequences are not statistical outliers but are related to the architectures and training procedures.
Figure 3. Image (a) contains lines drawn with a pen, while image (b) is clean.
Table 2 shows the attacks that have been created exclusively for medical images. Some of these methods use adversarial attacks in order to make medical models more robust, while others aim to decrease the efficacy of medical models. Comparing Table 2 with Table 1, we can conclude that medical-specific adversarial attacks are not as strong as ordinary attacks.
Table 2. Overview of medical adversarial attacks.
Reference | Attack Name | Models | Modality | Task | Performance Degradation (%)
[49] | Fatty Liver Attack | InceptionResNetV2 | Ultrasound | Classification | 48%
[51] | ASMA | U-Net | Fundoscopy, Dermoscopy | Segmentation | 98% success rate on targeted prediction
[55] | Multi-organ Segmentation Attack | U-Net | CT scans | Segmentation | Up to 85%
[56] | AdvSBF | ResNet50, MobileNet, DenseNet121 | X-ray | Classification | Up to 39%
[54] | Physical-World Attacks | ResNet, InceptionV3, InceptionResNetV2, MobileNet, Xception | Dermoscopy | Classification | Up to 60%
[57] | HFC | VGG16, ResNet50 | Fundoscopy, X-ray | All tasks | Up to 99.5%
[58] | MSA | U-Net, R2U-Net, Attention U-Net, Attention R2U-Net | Fundoscopy, Dermoscopy | Segmentation | 98% success rate on targeted prediction
[59] | SMIA | ResNet, U-Net, Custom CNNs | Fundoscopy, Endoscopy, CT scans | Classification, Segmentation | Up to 27%

4.3. Defenses—Attack Detection

Wu et al. [60] studied the classification of diabetic retinopathy with adversarial training. They used ResNet32 with the PGD method for generating adversarial samples. Adversarial training significantly improved the model's performance under attack. He et al. [61] proposed a non-local context encoding network (NLCEN), which defends against adversarial attacks on medical image segmentation using the contextual information of biomedical images. This network is based on ResNet and a feature pyramid network (FPN) in combination with a non-local context encoder. The experiments were performed on the JSRT and ISBI datasets with the iterative FGSM attack. The model requires 2 and 4 h of training and testing for JSRT and ISBI, respectively. NLCEN was compared with state-of-the-art models such as U-Net, InvertNet [62], SLSDeep [63], NWCN [64], and CDNN [65], achieving the best accuracy of all. Furthermore, this method retains high accuracy even under attacks with large perturbation magnitudes. Taghanaki et al. [66] studied adversarial examples on chest X-rays by replacing max pooling with average pooling. They used InceptionResNetV2 and NASNet Large with 10 different attacks, which fall into three categories: gradient-based, score-based, and decision-based attacks. The results showed that gradient-based attacks fooled the models efficiently even with average pooling, but average pooling does provide an improvement against score-based and decision-based attacks.
Table 3 summarizes all the defense and attack detection methods, with the corresponding tasks, modalities, and models. We observe that some methods provide significant protection against attacks, while others simply reduce an attack's success rate. At the same time, attack detection methods identify adversarial samples with very high accuracy. For example, the methods in [25][67] detect adversarial samples in general images and were applied to medical adversarial samples in [45].
Table 3. Overview of defense and attack detection methods.
References | Tested Attacks | Models | Modality | Task | Performance
[45][25][67] | FGSM, BIM, PGD, C&W | ResNet50 | X-ray, Dermoscopy, Fundoscopy | Classification | Detects adversarial examples with up to 100% accuracy
[68] | FGSM | CNN | MRI | Segmentation | Improves baseline methods by up to 1.5%
[69] | PGD | 3D ResNets | CT scans | Classification | Improves baseline methods by up to 10%, and by 35% on perturbed data
[70] | FGSM, JSMA | CNN | CT scans, MRI | All tasks | Improves baseline methods by up to 2%
[71] | VAT | U-Net | MRI | Segmentation | Improves baseline methods by up to 3%
[60] | PGD | ResNet32 | Fundoscopy | Classification | Accuracy increased by 40%
[61] | I-FGSM | U-Net, InvertNet, SLSDeep, NWCN, CDNN | X-ray | Segmentation | The dice score metric is reduced by only up to 11%
[66] | Gradient-based, Score-based, Decision-based | NasnetLarge, InceptionResNetV2 | X-ray | Classification | Accuracy increased by up to 9%
[72] | FNAF | U-Net, I-RIM | MRI | Reconstruction | Up to 72% more resilient
[73] | FNAF | U-Net, I-RIM | MRI | Reconstruction | Up to 72% more resilient
[74] | DAG | SegNet, U-Net, DenseNet | All modalities | Segmentation | Detects adversarial samples with 98% ROC-AUC
[75] | FGSM, I-FGSM, PGD, MIM, C&W | U-Net, V-Net, InceptionResNetV2 | Dermoscopy, X-ray | Segmentation, Classification | The accuracy is reduced by only up to 29%
[76] | Limited Angle | U-Net | CT scans | Reconstruction | Not provided
[77] | FGSM, I-FGSM, C&W | CNN | Fundoscopy, X-ray | Classification | The accuracy is reduced by only up to 24%
[78] | FGSM, BIM, PGD, C&W, DF | CNN | X-ray, CT scans | Classification | The accuracy is reduced by only up to 2%
[79] | ASMA | ResNet-50, U-Net, DenseNet | Dermoscopy, Fundoscopy | Classification, Segmentation | The accuracy is reduced by only up to 2%
[80] | FGSM, BIM, PGD, MIM | DenseNet121 | X-ray | Classification | Detects adversarial samples with up to 97.5% accuracy
[81] | FGSM, PGD, C&W | ResNet18 | Fundoscopy | Classification | Prediction accuracy under attack is 86.4%
[82] | FGSM | U-Net | CT scans | Segmentation | Improves baseline methods by up to 9% in terms of IoU
[83] | FGSM, BIM, C&W, DeepFool | VGG, ResNet | Microscopy | Classification | Detects adversarial samples with up to 99.95% accuracy
[84] | APGD-CE, APGD-DLR, FAB-T, Square Attack | ROG | CT scans, MRI | Segmentation | Improves baseline methods by up to 20% in terms of IoU
[85] | PGD, GAP | CheXNet, InceptionV3, Custom CNN | Dermoscopy, X-ray, Fundoscopy | Classification | Improves the standard defense method (adversarial training) by up to 9%

4.4. Benefits of Adversarially Robust Models

Creating models that are robust to adversarial attacks is crucial, especially in the medical domain. Moreover, some studies have shown that adversarially robust models have additional advantages. Lee et al. [86] proposed adversarial vertex mixup to overcome poor adversarial generalization. This method improves robust generalization and reduces the trade-off between standard accuracy and adversarial robustness. Liu et al. [87] proposed a new framework, termed Neural SDE, which incorporates several regularization mechanisms based on random noise injection. This framework produces more robust models, as it achieves better generalization and is resistant to adversarial and non-adversarial perturbations. Another interesting study [88] proposed adversarial-robustness-based adaptive label smoothing (AR-AdaLS), which incorporates the correlation between adversarial robustness and uncertainty. The authors found that taking the adversarial robustness of in-distribution data into account improves the calibration and stability of the model, even under distributional shifts. Yi et al. [89] showed that adversarially trained models achieve improved generalization on out-of-distribution data, which is quite important in medical image analysis. Adversarial learning not only improves adversarial accuracy but also improves a model's performance under various circumstances, making it more robust in real-life problems.

5. Implementation Aspects

5.1. Open-Source Libraries

Several open-source libraries are available to help create adversarial attacks and defenses. Using them, novel attack or defense methods can be implemented and their robustness studied. CleverHans [90] is one of the best-known Python libraries for creating adversarial examples with state-of-the-art attacks. Another well-known Python library for attacks is Foolbox [91], which runs adversarial attacks easily in PyTorch, TensorFlow, and JAX. The Adversarial Robustness Toolbox (ART) [92] is also a Python library and provides developers with adversarial attacks for testing their models. Advbox Family [93] is an open-source toolbox that supports Python and provides adversarial attacks and defenses. Another Python toolbox for adversarial robustness research is AdverTorch [94], which is implemented in PyTorch and supports both generating adversarial perturbations and defending against adversarial examples. Finally, DEEPSEC [95] is a uniform platform for security analysis of deep learning models that provides state-of-the-art adversarial attacks, defenses, and the corresponding utility metrics.
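As an indicative example, the snippet below shows how Foolbox (version 3 API) can wrap a PyTorch classifier and run an L∞ PGD attack; the ImageNet ResNet-50 and the random batch are only stand-ins for a real medical model and dataset, and exact attack names may differ across library versions.

    import torch
    import torchvision.models as models
    import foolbox as fb

    # Wrap a PyTorch classifier so Foolbox can attack it.
    model = models.resnet50(pretrained=True).eval()
    preprocessing = dict(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225], axis=-3)
    fmodel = fb.PyTorchModel(model, bounds=(0, 1), preprocessing=preprocessing)

    # Placeholder batch; in practice, load real images in [0, 1] with labels.
    images = torch.rand(8, 3, 224, 224)
    labels = torch.randint(0, 1000, (8,))

    attack = fb.attacks.LinfPGD()
    raw, clipped, is_adv = attack(fmodel, images, labels, epsilons=0.03)
    print("attack success rate:", is_adv.float().mean().item())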

5.2. Source Codes and Datasets

Apart from the open-source libraries discussed in the previous section, additional source code implementing novel attack, defense, and attack detection methods is provided by the respective authors, mainly via GitHub repositories.

This entry is adapted from the peer-reviewed paper 10.3390/electronics10172132

References

  1. Maliamanis, T.; Papakostas, G. Adversarial computer vision: A current snapshot. In Proceedings of the Twelfth International Conference on Machine Vision (ICMV 2019), Amsterdam, The Netherlands, 31 January 2020; p. 121.
  2. Papernot, N.; McDaniel, P.; Goodfellow, I. Transferability in Machine Learning: From Phenomena to Black-Box Attacks using Adversarial Samples. arXiv 2016, arXiv:1605.07277. Available online: http://arxiv.org/abs/1605.07277 (accessed on 4 June 2021).
  3. Kurakin, A.; Goodfellow, I.; Bengio, S. Adversarial Examples in the Physical World. arXiv 2017, arXiv:1607.02533. Available online: http://arxiv.org/abs/1607.02533 (accessed on 4 June 2021).
  4. Szegedy, C.; Zaremba, W.; Sutskever, I.; Bruna, J.; Erhan, D.; Goodfellow, I.; Fergus, R. Intriguing Properties of Neural Networks. arXiv 2014, arXiv:1312.6199. Available online: http://arxiv.org/abs/1312.6199 (accessed on 4 June 2021).
  5. Goodfellow, I.J.; Shlens, J.; Szegedy, C. Explaining and Harnessing Adversarial Examples. arXiv 2015, arXiv:1412.6572. Available online: http://arxiv.org/abs/1412.6572 (accessed on 4 June 2021).
  6. Xu, W.; Evans, D.; Qi, Y. Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks. In Proceedings of the 2018 Network and Distributed System Security Symposium, San Diego, CA, USA, 18–21 February 2018.
  7. Meng, D.; Chen, H. MagNet: A Two-Pronged Defense against Adversarial Examples. arXiv 2017, arXiv:1705.09064. Available online: http://arxiv.org/abs/1705.09064 (accessed on 4 June 2021).
  8. Madry, A.; Makelov, A.; Schmidt, L.; Tsipras, D.; Vladu, A. Towards Deep Learning Models Resistant to Adversarial Attacks. arXiv 2019, arXiv:1706.06083. Available online: http://arxiv.org/abs/1706.06083 (accessed on 4 June 2021).
  9. Paschali, M.; Conjeti, S.; Navarro, F.; Navab, N. Generalizability vs. Robustness: Investigating Medical Imaging Networks Using Adversarial Examples. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2018; Frangi, A.F., Schnabel, J.A., Davatzikos, C., Alberola-López, C., Fichtinger, G., Eds.; Springer International Publishing: Cham, Switzerland, 2018; Volume 11070, pp. 493–501.
  10. Chen, Y.-W.; Jain, L.C. (Eds.) Deep Learning in Healthcare: Paradigms and Applications; Springer International Publishing: Cham, Switzerland, 2020; Volume 171.
  11. Litjens, G.; Kooi, T.; Bejnordi, B.E.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; van der Laak, J.A.W.M.; van Ginneken, B.; Sánchez, C.I. A Survey on Deep Learning in Medical Image Analysis. Med. Image Anal. 2017, 42, 60–88.
  12. Finlayson, S.G.; Bowers, J.D.; Ito, J.; Zittrain, J.L.; Beam, A.L.; Kohane, I.S. Adversarial attacks on medical machine learning. Science 2019, 363, 1287–1289.
  13. Xu, H.; Ma, Y.; Liu, H.-C.; Deb, D.; Liu, H.; Tang, J.-L.; Jain, A.K. Adversarial Attacks and Defenses in Images, Graphs and Text: A Review. Int. J. Autom. Comput. 2020, 17, 151–178.
  14. Ren, K.; Zheng, T.; Qin, Z.; Liu, X. Adversarial Attacks and Defenses in Deep Learning. Engineering 2020, 6, 346–360.
  15. Carlini, N.; Wagner, D. Towards Evaluating the Robustness of Neural Networks. arXiv 2017, arXiv:1608.04644. Available online: http://arxiv.org/abs/1608.04644 (accessed on 4 June 2021).
  16. Papernot, N.; McDaniel, P.; Jha, S.; Fredrikson, M.; Celik, Z.B.; Swami, A. The Limitations of Deep Learning in Adversarial Settings. arXiv 2015, arXiv:1511.07528. Available online: http://arxiv.org/abs/1511.07528 (accessed on 4 June 2021).
  17. Moosavi-Dezfooli, S.-M.; Fawzi, A.; Fawzi, O.; Frossard, P. Universal Adversarial Perturbations. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 86–94.
  18. Xie, C.; Wang, J.; Zhang, Z.; Zhou, Y.; Xie, L.; Yuille, A. Adversarial Examples for Semantic Segmentation and Object Detection. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 1378–1387.
  19. Tramèr, F.; Kurakin, A.; Papernot, N.; Goodfellow, I.; Boneh, D.; McDaniel, P. Ensemble Adversarial Training: Attacks and Defenses. arXiv 2020, arXiv:1705.07204. Available online: http://arxiv.org/abs/1705.07204 (accessed on 4 June 2021).
  20. Xie, C.; Wang, J.; Zhang, Z.; Ren, Z.; Yuille, A. Mitigating Adversarial Effects Through Randomization. arXiv 2018, arXiv:1711.01991. Available online: http://arxiv.org/abs/1711.01991 (accessed on 4 June 2021).
  21. Guo, Y.; Zhang, C.; Zhang, C.; Chen, Y. Sparse DNNs with Improved Adversarial Robustness. arXiv 2019, arXiv:1810.09619. Available online: http://arxiv.org/abs/1810.09619 (accessed on 4 June 2021).
  22. Wang, Y.; Jha, S.; Chaudhuri, K. Analyzing the Robustness of Nearest Neighbors to Adversarial Examples. In Proceedings of the 35th International Conference on Machine Learning, Stockholm, Sweden, 10–15 July 2018; Volume 80, pp. 5133–5142.
  23. Liu, X.; Li, Y.; Wu, C.; Hsieh, C.-J. Adv-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network. arXiv 2019, arXiv:1810.01279. Available online: http://arxiv.org/abs/1810.01279 (accessed on 4 June 2021).
  24. Xiao, C.; Deng, R.; Li, B.; Yu, F.; Liu, M.; Song, D. Characterizing Adversarial Examples Based on Spatial Consistency Information for Semantic Segmentation. In Computer Vision—ECCV 2018; Ferrari, V., Hebert, M., Sminchisescu, C., Weiss, Y., Eds.; Springer International Publishing: Cham, Switzerland, 2018; Volume 11214, pp. 220–237.
  25. Ma, X.; Li, B.; Wang, Y.; Erfani, S.M.; Wijewickrema, S.; Schoenebeck, G.; Song, D.; Houle, M.E.; Bailey, J. Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality. arXiv 2018, arXiv:1801.02613. Available online: http://arxiv.org/abs/1801.02613 (accessed on 4 June 2021).
  26. Metzen, J.H.; Genewein, T.; Fischer, V.; Bischoff, B. On Detecting Adversarial Perturbations. arXiv 2017, arXiv:1702.04267. Available online: http://arxiv.org/abs/1702.04267 (accessed on 4 June 2021).
  27. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the Inception Architecture for Computer Vision. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 26 June–1 July 2016; pp. 2818–2826.
  28. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv 2017, arXiv:1704.04861. Available online: http://arxiv.org/abs/1704.04861 (accessed on 4 June 2021).
  29. Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495.
  30. Jegou, S.; Drozdzal, M.; Vazquez, D.; Romero, A.; Bengio, Y. The One Hundred Layers Tiramisu: Fully Convolutional DenseNets for Semantic Segmentation. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Honolulu, HI, USA, 21–26 July 2017; pp. 1175–1183.
  31. Wetstein, S.C.; González-Gonzalo, C.; Bortsova, G.; Liefers, B.; Dubost, F.; Katramados, I.; Hogeweg, L.; van Ginneken, B.; Pluim, J.P.W.; de Bruijne, M.; et al. Adversarial Attack Vulnerability of Medical Image Analysis Systems: Unexplored Factors. arXiv 2020, arXiv:2006.06356. Available online: http://arxiv.org/abs/2006.06356 (accessed on 4 June 2021).
  32. Finlayson, S.G.; Chung, H.W.; Kohane, I.S.; Beam, A.L. Adversarial Attacks Against Medical Deep Learning Systems. arXiv 2019, arXiv:1804.05296. Available online: http://arxiv.org/abs/1804.05296 (accessed on 4 June 2021).
  33. Cheng, G.; Ji, H. Adversarial Perturbation on MRI Modalities in Brain Tumor Segmentation. IEEE Access 2020, 8, 206009–206015.
  34. Li, Y.; Zhang, H.; Bermudez, C.; Chen, Y.; Landman, B.A.; Vorobeychik, Y. Anatomical context protects deep learning from adversarial perturbations in medical imaging. Neurocomputing 2020, 379, 370–378.
  35. Huq, A.; Pervin, M.T. Analysis of Adversarial Attacks on Skin Cancer Recognition. In Proceedings of the 2020 International Conference on Data Science and Its Applications (ICoDSA), Bandung, Indonesia, 5–6 August 2020; pp. 1–4.
  36. Anand, D.; Tank, D.; Tibrewal, H.; Sethi, A. Self-Supervision vs. Transfer Learning: Robust Biomedical Image Analysis Against Adversarial Attacks. In Proceedings of the 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), Iowa City, IA, USA, 3–7 April 2020; pp. 1159–1163.
  37. Paul, R.; Schabath, M.; Gillies, R.; Hall, L.; Goldgof, D. Mitigating Adversarial Attacks on Medical Image Understanding Systems. In Proceedings of the 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), Iowa City, IA, USA, 3–7 April 2020; pp. 1517–1521.
  38. Risk Susceptibility of Brain Tumor Classification to Adversarial Attacks|SpringerLink. Available online: https://link.springer.com/chapter/10.1007/978-3-030-31964-9_17 (accessed on 4 June 2021).
  39. Shah, A.; Lynch, S.; Niemeijer, M.; Amelon, R.; Clarida, W.; Folk, J.; Russell, S.; Wu, X.; Abramoff, M.D. Susceptibility to misdiagnosis of adversarial images by deep learning based retinal image analysis algorithms. In Proceedings of the 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), Washington, DC, USA, 4–7 April 2018; pp. 1454–1457.
  40. Kovalev, V.; Voynov, D. Influence of Control Parameters and the Size of Biomedical Image Datasets on the Success of Adversarial Attacks. In Pattern Recognition and Information Processing; Ablameyko, S.V., Krasnoproshin, V.V., Lukashevich, M.M., Eds.; Springer International Publishing: Cham, Switzerland, 2019; Volume 1055, pp. 301–311.
  41. Li, Y.; Zhu, Z.; Zhou, Y.; Xia, Y.; Shen, W.; Fishman, E.K.; Yuille, A.L. Volumetric Medical Image Segmentation: A 3D Deep Coarse-to-fine Framework and Its Adversarial Examples. arXiv 2019, arXiv:2010.16074. Available online: http://arxiv.org/abs/2010.16074 (accessed on 4 June 2021).
  42. Allyn, J.; Allou, N.; Vidal, C.; Renou, A.; Ferdynus, C. Adversarial attack on deep learning-based dermatoscopic image recognition systems: Risk of misdiagnosis due to undetectable image perturbations. Medicine 2020, 99, e23568.
  43. Hirano, H.; Minagi, A.; Takemoto, K. Universal adversarial attacks on deep neural networks for medical image classification. BMC Med. Imaging 2021, 21, 9.
  44. Hirano, H.; Koga, K.; Takemoto, K. Vulnerability of deep neural networks for detecting COVID-19 cases from chest X-ray images to universal adversarial attacks. PLoS ONE 2020, 15, e0243963.
  45. Ma, X.; Niu, Y.; Gu, L.; Wang, Y.; Zhao, Y.; Bailey, J.; Lu, F. Understanding Adversarial Attacks on Deep Learning Based Medical Image Analysis Systems. arXiv 2020, arXiv:1907.10456. Available online: http://arxiv.org/abs/1907.10456 (accessed on 4 June 2021).
  46. Pal, B.; Gupta, D.; Rashed-Al-Mahfuz, M.; Alyami, S.A.; Moni, M.A. Vulnerability in Deep Transfer Learning Models to Adversarial Fast Gradient Sign Attack for COVID-19 Prediction from Chest Radiography Images. Appl. Sci. 2021, 11, 4233.
  47. Bortsova, G.; Dubost, F.; Hogeweg, L.; Katramados, I.; de Bruijne, M. Adversarial Heart Attack: Neural Networks Fooled to Segment Heart Symbols in Chest X-Ray Images. arXiv 2021, arXiv:2104.00139. Available online: http://arxiv.org/abs/2104.00139 (accessed on 10 August 2021).
  48. On the Assessment of Robustness of Telemedicine Applications against Adversarial Machine Learning Attacks | SpringerLink. Available online: https://link.springer.com/chapter/10.1007/978-3-030-79457-6_44 (accessed on 10 August 2021).
  49. Byra, M.; Styczynski, G.; Szmigielski, C.; Kalinowski, P.; Michalowski, L.; Paluszkiewicz, R.; Ziarkiewicz-Wroblewska, B.; Zieniewicz, K.; Nowicki, A. Adversarial Attacks on Deep Learning Models for Fatty Liver Disease Classification by Modification of Ultrasound Image Reconstruction Method. arXiv 2020, arXiv:2009.03364. Available online: http://arxiv.org/abs/2009.03364 (accessed on 4 June 2021).
  50. Chen, P.-Y.; Zhang, H.; Sharma, Y.; Yi, J.; Hsieh, C.-J. ZOO: Zeroth Order Optimization Based Black-box Attacks to Deep Neural Networks without Training Substitute Models. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, Dallas, TX, USA, 3 November 2017; pp. 15–26.
  51. Ozbulak, U.; Van Messem, A.; De Neve, W. Impact of Adversarial Examples on Deep Learning Models for Biomedical Image Segmentation. arXiv 2019, arXiv:1907.13124. Available online: http://arxiv.org/abs/1907.13124 (accessed on 4 June 2021).
  52. Pena-Betancor, C.; Gonzalez-Hernandez, M.; Fumero-Batista, F.; Sigut, J.; Medina-Mesa, E.; Alayon, S.; Gonzalez de la Rosa, M. Estimation of the Relative Amount of Hemoglobin in the Cup and Neuroretinal Rim Using Stereoscopic Color Fundus Images. Investig. Ophthalmol. Vis. Sci. 2015, 56, 1562–1568.
  53. Codella, N.C.F.; Gutman, D.; Celebi, M.E.; Helba, B.; Marchetti, M.A.; Dusza, S.W.; Kalloo, A.; Liopyris, K.; Mishra, N.; Kittler, H.; et al. Skin Lesion Analysis Toward Melanoma Detection: A Challenge at the 2017 International Symposium on Biomedical Imaging (ISBI), Hosted by the International Skin Imaging Collaboration (ISIC). arXiv 2018, arXiv:1710.05006. Available online: http://arxiv.org/abs/1710.05006 (accessed on 4 June 2021).
  54. Kugler, D. Physical Attacks in Dermoscopy: An Evaluation of Robustness for clinical Deep-Learning. J. Mach. Learn. Biomed. Imaging 2021, 7, 1–32.
  55. Chen, L.; Bentley, P.; Mori, K.; Misawa, K.; Fujiwara, M.; Rueckert, D. Intelligent Image Synthesis to Attack a Segmentation CNN Using Adversarial Learning. arXiv 2019, arXiv:1909.11167. Available online: http://arxiv.org/abs/1909.11167 (accessed on 4 June 2021).
  56. Tian, B.; Guo, Q.; Juefei-Xu, F.; Chan, W.L.; Cheng, Y.; Li, X.; Xie, X.; Qin, S. Bias Field Poses a Threat to DNN-based X-Ray Recognition. arXiv 2021, arXiv:2009.09247. Available online: http://arxiv.org/abs/2009.09247 (accessed on 4 June 2021).
  57. Yao, Q.; He, Z.; Lin, Y.; Ma, K.; Zheng, Y.; Zhou, S.K. A Hierarchical Feature Constraint to Camouflage Medical Adversarial Attacks. arXiv 2021, arXiv:2012.09501. Available online: http://arxiv.org/abs/2012.09501 (accessed on 4 June 2021).
  58. Shao, M.; Zhang, G.; Zuo, W.; Meng, D. Target attack on biomedical image segmentation model based on multi-scale gradients. Inf. Sci. 2021, 554, 33–46.
  59. Qi, G.; Gong, L.; Song, Y.; Ma, K.; Zheng, Y. Stabilized Medical Image Attacks. arXiv 2021, arXiv:2103.05232. Available online: http://arxiv.org/abs/2103.05232 (accessed on 10 August 2021).
  60. Wu, D.; Liu, S.; Ban, J. Classification of Diabetic Retinopathy Using Adversarial Training. IOP Conf. Ser. Mater. Sci. Eng. 2020, 806, 012050.
  61. He, X.; Yang, S.; Li, G.; Li, H.; Chang, H.; Yu, Y. Non-Local Context Encoder: Robust Biomedical Image Segmentation against Adversarial Attacks. AAAI 2019, 33, 8417–8424.
  62. Novikov, A.A.; Lenis, D.; Major, D.; Hladůvka, J.; Wimmer, M.; Bühler, K. Fully Convolutional Architectures for Multi-Class Segmentation in Chest Radiographs. arXiv 2018, arXiv:1701.08816. Available online: http://arxiv.org/abs/1701.08816 (accessed on 4 June 2021).
  63. Sarker, M.M.K.; Rashwan, H.A.; Akram, F.; Banu, S.F.; Saleh, A.; Singh, V.K.; Chowdhury, F.U.H.; Abdulwahab, S.; Romani, S.; Radeva, P.; et al. SLSDeep: Skin Lesion Segmentation Based on Dilated Residual and Pyramid Pooling Networks. arXiv 2018, arXiv:1805.10241. Available online: http://arxiv.org/abs/1805.10241 (accessed on 4 June 2021).
  64. Hwang, S.; Park, S. Accurate Lung Segmentation via Network-Wise Training of Convolutional Networks. arXiv 2017, arXiv:1708.00710. Available online: http://arxiv.org/abs/1708.00710 (accessed on 4 June 2021).
  65. Yuan, Y. Automatic skin lesion segmentation with fully convolutional-deconvolutional networks. IEEE J. Biomed. Health Inform. 2019, 23, 519–526.
  66. Taghanaki, S.A.; Das, A.; Hamarneh, G. Vulnerability Analysis of Chest X-Ray Image Classification Against Adversarial Attacks. arXiv 2018, arXiv:1807.02905. Available online: http://arxiv.org/abs/1807.02905 (accessed on 4 June 2021).
  67. Feinman, R.; Curtin, R.R.; Shintre, S.; Gardner, A.B. Detecting Adversarial Samples from Artifacts. arXiv 2017, arXiv:1703.00410. Available online: http://arxiv.org/abs/1703.00410 (accessed on 4 June 2021).
  68. Ren, X.; Zhang, L.; Wei, D.; Shen, D.; Wang, Q. Brain MR Image Segmentation in Small Dataset with Adversarial Defense and Task Reorganization. In Machine Learning in Medical Imaging; Suk, H.-I., Liu, M., Yan, P., Lian, C., Eds.; Springer International Publishing: Cham, Switzerland, 2019; Volume 11861, pp. 1–8.
  69. Liu, S.; Setio, A.A.A.; Ghesu, F.C.; Gibson, E.; Grbic, S.; Georgescu, B.; Comaniciu, D. No Surprises: Training Robust Lung Nodule Detection for Low-Dose CT Scans by Augmenting with Adversarial Attacks. arXiv 2020, arXiv:2003.03824. Available online: http://arxiv.org/abs/2003.03824 (accessed on 4 June 2021).
  70. Vatian, A.; Gusarova, N.; Dobrenko, N.; Dudorov, S.; Nigmatullin, N.; Shalyto, A.; Lobantsev, A. Impact of Adversarial Examples on the Efficiency of Interpretation and Use of Information from High-Tech Medical Images. In Proceedings of the 2019 24th Conference of Open Innovations Association (FRUCT), Moscow, Russia, 8–12 April 2019; pp. 472–478.
  71. Chen, C.; Qin, C.; Qiu, H.; Ouyang, C.; Wang, S.; Chen, L.; Tarroni, G.; Bai, W.; Rueckert, D. Realistic Adversarial Data Augmentation for MR Image Segmentation. arXiv 2020, arXiv:2006.13322. Available online: http://arxiv.org/abs/2006.13322 (accessed on 4 June 2021).
  72. Cheng, K.; Caliva, F.; Shah, R.; Han, M.; Majumdar, S.; Pedoia, V. Addressing the False Negative Problem of Deep Learning MRI Reconstruction Models by Adversarial Attacks and Robust Training. Proc. Mach. Learn. Res. 2020, 121, 121–135.
  73. Calivá, F.; Cheng, K.; Shah, R.; Pedoia, V. Adversarial Robust Training of Deep Learning MRI Reconstruction Models. arXiv 2021, arXiv:2011.00070. Available online: http://arxiv.org/abs/2011.00070 (accessed on 4 June 2021).
  74. Park, H.; Bayat, A.; Sabokrou, M.; Kirschke, J.S.; Menze, B.H. Robustification of Segmentation Models Against Adversarial Perturbations in Medical Imaging. arXiv 2020, arXiv:2009.11090. Available online: http://arxiv.org/abs/2009.11090 (accessed on 4 June 2021).
  75. Taghanaki, S.A.; Abhishek, K.; Azizi, S.; Hamarneh, G. A Kernelized Manifold Mapping to Diminish the Effect of Adversarial Perturbations. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019; pp. 11332–11341.
  76. Huang, Y.; Würfl, T.; Breininger, K.; Liu, L.; Lauritsch, G.; Maier, A. Some Investigations on Robustness of Deep Learning in Limited Angle Tomography. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2018; Frangi, A.F., Schnabel, J.A., Davatzikos, C., Alberola-López, C., Fichtinger, G., Eds.; Springer International Publishing: Cham, Switzerland, 2018; Volume 11070, pp. 145–153.
  77. Xue, F.-F.; Peng, J.; Wang, R.; Zhang, Q.; Zheng, W.-S. Improving Robustness of Medical Image Diagnosis with Denoising Convolutional Neural Networks. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2019; Shen, D., Liu, T., Peters, T.M., Staib, L.H., Essert, C., Zhou, S., Yap, P.-T., Khan, A., Eds.; Springer International Publishing: Cham, Switzerland, 2019; Volume 11769, pp. 846–854.
  78. Tripathi, A.M.; Mishra, A. Fuzzy Unique Image Transformation: Defense Against Adversarial Attacks on Deep COVID-19 Models. arXiv 2020, arXiv:2009.04004. Available online: http://arxiv.org/abs/2009.04004 (accessed on 4 June 2021).
  79. Defending Deep Learning-Based Biomedical Image Segmentation from Adversarial Attacks: A Low-Cost Frequency Refinement Approach|SpringerLink. Available online: https://link.springer.com/chapter/10.1007/978-3-030-59719-1_34 (accessed on 4 June 2021).
  80. Li, X.; Zhu, D. Robust Detection of Adversarial Attacks on Medical Images. In Proceedings of the 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), Iowa City, IA, USA, 3–7 April 2020; pp. 1154–1158.
  81. Li, X.; Pan, D.; Zhu, D. Defending against Adversarial Attacks on Medical Imaging AI System, Classification or Detection? arXiv 2020, arXiv:2006.13555. Available online: http://arxiv.org/abs/2006.13555 (accessed on 4 June 2021).
  82. Pervin, M.T.; Tao, L.; Huq, A.; He, Z.; Huo, L. Adversarial Attack Driven Data Augmentation for Accurate and Robust Medical Image Segmentation. arXiv 2021, arXiv:2105.12106. Available online: http://arxiv.org/abs/2105.12106 (accessed on 10 August 2021).
  83. Uwimana, A.; Senanayake, R. Out of Distribution Detection and Adversarial Attacks on Deep Neural Networks for Robust Medical Image Analysis. arXiv 2021, arXiv:2107.04882. Available online: http://arxiv.org/abs/2107.04882 (accessed on 10 August 2021).
  84. Daza, L.; Pérez, J.C.; Arbeláez, P. Towards Robust General Medical Image Segmentation. arXiv 2021, arXiv:2107.04263. Available online: http://arxiv.org/abs/2107.04263 (accessed on 10 August 2021).
  85. Xu, M.; Zhang, T.; Li, Z.; Liu, M.; Zhang, D. Towards evaluating the robustness of deep diagnostic models by adversarial attack. Med. Image Anal. 2021, 69, 101977.
  86. Lee, S.; Lee, H.; Yoon, S. Adversarial Vertex Mixup: Toward Better Adversarially Robust Generalization. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 14–19 June 2020; pp. 269–278.
  87. Liu, X.; Xiao, T.; Si, S.; Cao, Q.; Kumar, S.; Hsieh, C.-J. How Does Noise Help Robustness? Explanation and Exploration under the Neural SDE Framework. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 14–19 June 2020; pp. 279–287.
  88. Qin, Y.; Wang, X.; Beutel, A.; Chi, E.H. Improving Uncertainty Estimates through the Relationship with Adversarial Robustness. arXiv 2020, arXiv:2006.16375. Available online: http://arxiv.org/abs/2006.16375 (accessed on 10 August 2021).
  89. Yi, M.; Hou, L.; Sun, J.; Shang, L.; Jiang, X.; Liu, Q.; Ma, Z.-M. Improved OOD Generalization via Adversarial Training and Pre-training. arXiv 2021, arXiv:2105.11144. Available online: http://arxiv.org/abs/2105.11144 (accessed on 10 August 2021).
  90. Papernot, N.; Faghri, F.; Carlini, N.; Goodfellow, I.; Feinman, R.; Kurakin, A.; Xie, C.; Sharma, Y.; Brown, T.; Roy, A.; et al. Technical Report on the CleverHans v2.1.0 Adversarial Examples Library. arXiv 2018, arXiv:1610.00768. Available online: http://arxiv.org/abs/1610.00768 (accessed on 4 June 2021).
  91. Rauber, J.; Zimmermann, R.; Bethge, M.; Brendel, W. Foolbox Native: Fast adversarial attacks to benchmark the robustness of machine learning models in PyTorch, TensorFlow, and JAX. JOSS 2020, 5, 2607.
  92. Nicolae, M.-I.; Sinn, M.; Tran, M.N.; Buesser, B.; Rawat, A.; Wistuba, M.; Zantedeschi, V.; Baracaldo, N.; Chen, B.; Ludwig, H.; et al. Adversarial Robustness Toolbox v1.0.0. arXiv 2019, arXiv:1807.01069. Available online: http://arxiv.org/abs/1807.01069 (accessed on 4 June 2021).
  93. Goodman, D.; Xin, H.; Yang, W.; Yuesheng, W.; Junfeng, X.; Huan, Z. Advbox: A Toolbox to Generate Adversarial Examples that Fool Neural Networks. arXiv 2020, arXiv:2001.05574. Available online: http://arxiv.org/abs/2001.05574 (accessed on 4 June 2021).
  94. Ding, G.W.; Wang, L.; Jin, X. advertorch v0.1: An Adversarial Robustness Toolbox based on PyTorch. arXiv 2019, arXiv:1902.07623. Available online: http://arxiv.org/abs/1902.07623 (accessed on 4 June 2021).
  95. Ling, X.; Ji, S.; Zou, J.; Wang, J.; Wu, C.; Li, B.; Wang, T. DEEPSEC: A Uniform Platform for Security Analysis of Deep Learning Model. In Proceedings of the 2019 IEEE Symposium on Security and Privacy (SP), San Francisco, CA, USA, 19–23 May 2019; pp. 673–690.