Papers in adversarial machine learning — availability attacks

Poisoning deep learning algorithms

Posted by Dillon Niederhut on

With more and more deep learning models being trained on public data, there is a risk that poisoned data is fed to these models during training. Here, we discuss one approach to constructing poisoned training data that attacks deep learning models.
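The post's particular attack isn't detailed in this teaser, but the general idea of poisoning training data can be illustrated with a minimal label-flipping baseline, one of the simplest availability-style attacks. The function name and parameters below are illustrative, not taken from the post:

```python
import numpy as np

def poison_labels(y, fraction=0.1, num_classes=2, seed=0):
    """Return a copy of y with a fraction of labels flipped to another class.

    This is a simple label-flipping sketch of an availability attack;
    it is not the specific method from the post.
    """
    rng = np.random.default_rng(seed)
    y_poisoned = y.copy()
    n_poison = int(len(y) * fraction)
    # Choose distinct training examples to corrupt.
    idx = rng.choice(len(y), size=n_poison, replace=False)
    # Shift each chosen label by a random nonzero offset modulo the number
    # of classes, so every poisoned label differs from the original.
    offsets = rng.integers(1, num_classes, size=n_poison)
    y_poisoned[idx] = (y_poisoned[idx] + offsets) % num_classes
    return y_poisoned

# Corrupt 10% of a toy label vector.
y = np.zeros(100, dtype=int)
y_p = poison_labels(y, fraction=0.1, num_classes=2)
```

A model trained on `y_p` instead of `y` sees a controlled fraction of wrong labels, which degrades its accuracy on clean test data — the defining goal of an availability attack.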
