Internet of Things (IoT) technologies serve as the backbone of cutting-edge intelligent systems. Machine Learning (ML) paradigms have been adopted within IoT environments to exploit their ability to mine complex patterns. Despite the promising reported results, ML-based solutions exhibit several security vulnerabilities and threats. In particular, Adversarial Machine Learning (AML) attacks can drastically degrade the performance of ML models. AML also represents an active research field that continually promotes novel techniques to generate, and to defend against, Adversarial Example (AE) attacks.
Among the most widely studied AE generation techniques is the Carlini and Wagner (C&W) attack, which crafts an adversarial example $X'$ by minimizing the following objective:

$$\min_{X'} \; \|X - X'\|_2^2 + c \cdot \mathrm{loss}_{F,l}(X')$$

where $c$ is a hyper-parameter whose value is selected via line search; $\|X - X'\|_2$ is the $L_2$-norm measuring the distortion between the original sample $X$ and the adversarial example $X'$; and $\mathrm{loss}_{F,l}(\cdot)$ is a loss function that drives the classifier $F$ toward assigning the target label $l$.
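Below is a simplified PyTorch sketch of this objective, not the full attack: it fixes the trade-off constant $c$ instead of searching over it, sets the confidence margin to zero, and assumes a differentiable classifier `model` that returns logits for inputs normalized to $[0, 1]$. All names and defaults are illustrative.

```python
import torch

def cw_l2(model, x, target, c=1.0, steps=100, lr=0.01):
    """Simplified C&W-style L2 attack: minimize ||X - X'||_2^2 + c * loss_{F,l}(X')."""
    # Optimize in tanh space so X' always stays inside [0, 1].
    w = torch.atanh((x * 2 - 1).clamp(-0.999, 0.999)).detach().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        x_adv = (torch.tanh(w) + 1) / 2
        logits = model(x_adv)
        target_logit = logits.gather(1, target.view(-1, 1)).squeeze(1)
        # Largest logit among all non-target classes.
        other_logit = logits.scatter(1, target.view(-1, 1), float("-inf")).max(dim=1).values
        loss_fl = torch.clamp(other_logit - target_logit, min=0)  # loss_{F,l}(X'), margin = 0
        l2 = ((x_adv - x) ** 2).flatten(1).sum(dim=1)             # ||X - X'||_2^2
        loss = (l2 + c * loss_fl).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return ((torch.tanh(w) + 1) / 2).detach()
```

The tanh change of variables is one of the design choices of the original attack: it enforces the box constraint on $X'$ without explicit clipping.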
The Fast Gradient Sign Method (FGSM) generates an adversarial example in a single gradient step:

$$X_{adv} = X + \epsilon \cdot \mathrm{sign}(\nabla_X J(X, Y_{true}))$$

where $X_{adv}$ is the adversarial example of $X$, $X$ is the original sample, and $\epsilon$ is a hyper-parameter that controls the amplitude of the perturbation. Additionally, $\mathrm{sign}(\cdot)$ is the sign function, $J(\cdot)$ is the cost function with respect to the original sample and its correct label $Y_{true}$, and $\nabla_X J$ is the gradient of $J$ with respect to $X$. However, this method is subject to label leaking, so other researchers suggest replacing the correct label $Y_{true}$ with the label predicted by the model.
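As an illustration, here is a minimal PyTorch sketch of this one-step formula, under the same assumptions as above (a classifier `model` returning logits, inputs in $[0, 1]$); the function name and the default $\epsilon$ are illustrative.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y_true, epsilon=0.03):
    """One-step FGSM: X_adv = X + epsilon * sign(grad_X J(X, Y_true))."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y_true)   # J(X, Y_true)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()        # step in the gradient-sign direction
    return x_adv.clamp(0.0, 1.0).detach()      # keep the sample in the valid input range
```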
The Basic Iterative Method (BIM) applies FGSM repeatedly with a small step size $\alpha$, clipping the intermediate result after every step:

$$X_{adv}^{N+1} = \mathrm{Clip}_{X,\epsilon}\left\{ X_{adv}^{N} + \alpha \cdot \mathrm{sign}(\nabla_X J(X_{adv}^{N}, Y_{true})) \right\}$$

where $\mathrm{Clip}_{X,\epsilon}(\cdot)$ denotes element-wise clipping into the range $[X - \epsilon, X + \epsilon]$. The Iterative Least-Likely Class (ILLC) method is a targeted variant that instead perturbs the sample toward the class the model considers least probable:

$$X_{adv}^{N+1} = \mathrm{Clip}_{X,\epsilon}\left\{ X_{adv}^{N} - \alpha \cdot \mathrm{sign}(\nabla_X J(X_{adv}^{N}, Y_{LL})) \right\}$$

where the least-likely probability class $Y_{LL}$ is selected according to the following formula:

$$Y_{LL} = \arg\min_{y} \, p(y \mid X)$$
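The sketch below implements both iterative variants under the same assumptions as the earlier snippets; the flag `least_likely` (an illustrative name) switches from BIM's ascent on the true-label loss to ILLC's descent toward $Y_{LL}$.

```python
import torch
import torch.nn.functional as F

def iterative_attack(model, x, y_true, epsilon=0.03, alpha=0.005, steps=10,
                     least_likely=False):
    """BIM (least_likely=False) or ILLC (least_likely=True)."""
    if least_likely:
        with torch.no_grad():
            # Y_LL = argmin_y p(y | X); argmin over logits equals argmin over softmax probs.
            y = model(x).argmin(dim=1)
    else:
        y = y_true
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # BIM ascends the loss w.r.t. Y_true; ILLC descends it w.r.t. Y_LL.
        x_adv = x_adv - alpha * grad.sign() if least_likely else x_adv + alpha * grad.sign()
        # Clip_{X,eps}: stay within [X - eps, X + eps] and in the valid input range.
        x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon).clamp(0, 1).detach()
    return x_adv
```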
This entry is adapted from the peer-reviewed paper 10.3390/app13106001