Poisoning Attacks on Deep Neural Networks
  • Release Date: 2022-11-25
  • deep neural networks
  • adversarial attacks
  • poisoning
  • backdoors
Video Introduction

This video is adapted from the article with DOI 10.3390/app122111053.

Deep neural networks (DNNs) have delivered state-of-the-art performance in many fields. With the broader deployment of DNN models in critical applications, the security of DNNs has become an active, yet still nascent, research area. Recent studies have shown that attacks against DNNs can have catastrophic consequences. Poisoning attacks, including backdoor and Trojan attacks, are among the growing threats to DNNs. A wide-angle view of these evolving threats is essential for understanding the underlying security issues. In this regard, creating a semantic model and a knowledge graph of poisoning attacks can reveal relationships between attacks across intricate data and enhance the security knowledge landscape.
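
To make the threat concrete, the sketch below illustrates one common form of backdoor-style data poisoning: a small trigger pattern is stamped onto a fraction of the training samples, and those samples are relabeled with an attacker-chosen target class. This is a minimal, illustrative example using NumPy arrays as stand-ins for images; the function names, trigger shape, and poisoning rate are hypothetical and are not taken from the cited paper.

```python
# Illustrative sketch of a backdoor (data-poisoning) attack on image data.
# All names and parameters are hypothetical; NumPy is used so the example
# stays self-contained and runnable without a deep learning framework.
import numpy as np

def add_trigger(image, patch_value=1.0, patch_size=3):
    """Stamp a small square trigger into the bottom-right corner of an image."""
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:] = patch_value
    return poisoned

def poison_dataset(images, labels, target_label, poison_rate=0.05, seed=0):
    """Inject the trigger into a fraction of the training set and relabel
    those samples with the attacker's target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = add_trigger(images[i])
        labels[i] = target_label
    return images, labels, idx

# Toy usage: 100 random 28x28 "images" with labels 0-9.
X = np.random.rand(100, 28, 28)
y = np.random.randint(0, 10, size=100)
X_poisoned, y_poisoned, poisoned_idx = poison_dataset(X, y, target_label=7)
print(f"Poisoned {len(poisoned_idx)} of {len(X)} training samples")
```

A model trained on such a poisoned set typically behaves normally on clean inputs but predicts the attacker's target class whenever the trigger pattern is present, which is what makes backdoor attacks difficult to detect through standard accuracy checks.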
