Security Issues in Deep Learning

03/Nov/20

Developments in deep learning and artificial intelligence are making our lives easier day by day. These systems suggest what we want to buy, drive our cars, optimize traffic, help with our medical diagnoses, and do many other things. In the future, deep learning is even expected to be used in courts. It is known that the Chinese government uses a mass surveillance system that gives a score to each individual, and it most probably relies on deep learning/AI algorithms as well. In this article, we will discuss some security and privacy concerns about using deep learning.

There are two types of attacks against deep learning models known so far. The first is the poisoning attack, which interferes with the model's training phase. The second is the evasion attack, which happens at inference time and causes the given input to be misclassified.

Known poisoning attacks against deep learning models are below (a minimal sketch follows the list):

  • Back-gradient optimization
  • Generative method
  • Poisoning GAN
  • Poisoning attack using influence function
  • Clean-label feature collision attack
  • Convex polytope attack
  • Backdoor attack
  • Trojaning attack
  • Invisible backdoor attack
  • Clean-label backdoor attack
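
To make the poisoning idea concrete, here is a minimal Python/NumPy sketch of one of the simplest variants, label flipping. It only illustrates how a small poisoned fraction enters a training set; the techniques listed above are considerably more subtle. The function name and the 5% poisoning fraction are illustrative choices of ours, not taken from any specific paper.

```python
# Minimal label-flipping poisoning sketch (illustrative, not from a paper).
import numpy as np

def flip_labels(labels: np.ndarray, num_classes: int,
                fraction: float = 0.05, seed: int = 0) -> np.ndarray:
    """Return a copy of `labels` with a small random fraction reassigned."""
    rng = np.random.default_rng(seed)
    poisoned = labels.copy()
    n_poison = int(len(labels) * fraction)
    idx = rng.choice(len(labels), size=n_poison, replace=False)
    # Shift each chosen label by a nonzero offset so it lands on a wrong class.
    poisoned[idx] = (poisoned[idx] + rng.integers(1, num_classes, n_poison)) % num_classes
    return poisoned

# Example: poison 5% of a 10-class label vector.
labels = np.random.randint(0, 10, size=1000)
print((labels != flip_labels(labels, num_classes=10)).sum(), "labels flipped")
```

A model trained on the poisoned set learns from the wrong labels; data sanitization defenses try to detect such anomalous samples before training.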


Some of the known evasion attacks against deep learning models are below (an FGSM sketch follows the list):

  • Human AE
  • Policy induction attack
  • Physical world AE
  • Membership inference
  • Malware classification
  • Adversarial attacks on policies
  • Generating natural adversarial examples
  • Constructing unrestricted adversarial examples
  • Semantic adversarial attack
  • BPDA
  • Momentum iFGSM
  • AS attack
  • ATN
  • Attacks on RL
  • UAP
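
As an example of how evasion works in practice, below is a minimal PyTorch sketch of FGSM, the single-step ancestor of the Momentum iFGSM variant listed above. `model` can be any differentiable classifier with inputs in [0, 1]; the `epsilon` perturbation budget is an illustrative assumption.

```python
# Minimal FGSM evasion sketch in PyTorch (epsilon is illustrative).
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, epsilon=0.03):
    """Return an adversarially perturbed copy of `x`."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Step in the direction that increases the loss, then keep pixels valid.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()
```

The perturbation is often imperceptible to humans yet can flip the model's prediction; iterative and momentum variants simply repeat this step with smaller increments.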


Both types of attacks aim to make the model malfunction or stop working altogether. Adding noise to inputs or injecting a small number of poisoned samples into the training set are examples of such attacks. Luckily, defense methods against these attacks exist. We suggest that deep learning practitioners be aware of such vulnerabilities and learn how to secure their models.

Beyond securing the model against tampering attacks, you should also be concerned about privacy issues. There is a potential risk that trained models leak private information even without the dataset itself being revealed. Recent studies show that model inversion is possible: training images can be recovered from the model. Moreover, since most deep learning applications run on cloud servers, users may lose control over their own data; voice recognition and face recognition tools are such examples.
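
To illustrate the model inversion risk, here is a hedged PyTorch sketch in the spirit of those studies: starting from random noise, we optimize an input until the model assigns high confidence to a target class, which can yield a recognizable class-representative image. The input shape, step count, and learning rate are illustrative assumptions.

```python
# Gradient-based model inversion sketch (hyperparameters are illustrative).
import torch
import torch.nn.functional as F

def invert_class(model, target_class, shape=(1, 1, 28, 28),
                 steps=200, lr=0.1):
    x = torch.rand(shape, requires_grad=True)  # start from random noise
    optimizer = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        # Maximize the target class's log-probability.
        loss = -F.log_softmax(model(x), dim=1)[0, target_class]
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            x.clamp_(0, 1)  # keep the reconstruction in a valid pixel range
    return x.detach()
```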

Lastly, we would like to name some defense techniques. Homomorphic encryption can be applied to the data that is fed into the model; there are several different ways of using it, some of which are costly since they require changes to the model implementation. Differential privacy is another way of defending the model: it helps protect the training data against inversion attacks.
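
The core idea behind differentially private training (as in DP-SGD) is to clip each example's gradient and add calibrated Gaussian noise before the parameter update, which bounds how much any single training example can influence the model. The sketch below assumes per-example gradients are already computed (libraries such as Opacus provide this); the clip norm and noise scale are illustrative, and real deployments should use a vetted implementation.

```python
# One DP-SGD-style update given per-example gradients (batch dim first).
# Assumes each parameter tensor is at least 1-D, so each gradient is 2-D+.
import torch

def dp_sgd_step(params, per_example_grads, lr=0.1,
                clip_norm=1.0, noise_std=1.0):
    batch_size = per_example_grads[0].shape[0]
    for p, g in zip(params, per_example_grads):
        # Clip each example's gradient norm to bound its influence.
        norms = g.flatten(1).norm(dim=1).clamp(min=1e-12)
        scale = (clip_norm / norms).clamp(max=1.0)
        g = g * scale.view(-1, *([1] * (g.dim() - 1)))
        # Average, add Gaussian noise scaled to the clip norm, then update.
        noisy = g.sum(dim=0) / batch_size
        noisy = noisy + torch.randn_like(p) * (noise_std * clip_norm / batch_size)
        p.data -= lr * noisy
```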

Deep learning, alongside IoT devices, is becoming a huge part of our lives. Most products on the market, which we use whether we notice or not, do not consider security well due to a lack of standardization. Therefore, we should not ignore the security issues brought by the development of new technologies.
