Papers in adversarial machine learning — adversarial defense

You Only Look Eighty Times: defending object detectors with repeated masking

Posted by Dillon Niederhut

Adversarial patches pose a tricky problem in object detection, because any solution has to handle an unknown number of both objects and patches. Relaxing the problem to defend only against evasion attacks lets you reuse the masking approach from certified object classification with some success.
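
The gist, in a minimal Python sketch (the `detector` callable, the zero-fill masking, and the mask size and stride are illustrative assumptions, not the paper's actual choices):

```python
def masked_detections(image, detector, mask_size=64, stride=64):
    """Run a detector on many masked copies of an image and pool the results.

    A patch can only suppress a detection while it is visible, so objects
    that reappear once the patch is masked out are recovered by the union.
    `image` is an HxWxC numpy array; `detector` is assumed to return a
    list of (box, label, score) tuples for a single image.
    """
    h, w = image.shape[:2]
    pooled = []
    for y in range(0, h - mask_size + 1, stride):
        for x in range(0, w - mask_size + 1, stride):
            masked = image.copy()
            masked[y:y + mask_size, x:x + mask_size] = 0  # occlude one region
            pooled.extend(detector(masked))
    # Deduplication (e.g., IoU-based merging of the pooled boxes) would go here.
    return pooled
```

The size of the mask grid determines how many times the detector runs — presumably the eighty looks of the title.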

Read more →


Minority reports (yes, like the movie) as a machine learning defense

Posted by Dillon Niederhut

Adversarial patch attacks are hard to defend against because they are robust to denoising-based defenses. A more effective strategy is to generate several partially occluded versions of the input image, collect a prediction for each, and then take the *least common* predicted label.
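
The intuition: most occluded copies still show the patch and vote for the attacker's label, while the few copies that cover the patch recover the clean prediction — the minority report. A rough sketch (the `classify` function and the occlusion parameters are hypothetical stand-ins):

```python
from collections import Counter

def minority_vote(image, classify, mask_size=32, stride=32):
    """Classify many partially occluded copies and return the rarest label.

    `image` is an HxWxC numpy array; `classify` is assumed to map an
    image array to a single predicted label.
    """
    h, w = image.shape[:2]
    votes = Counter()
    for y in range(0, h - mask_size + 1, stride):
        for x in range(0, w - mask_size + 1, stride):
            occluded = image.copy()
            occluded[y:y + mask_size, x:x + mask_size] = 0  # hide one region
            votes[classify(occluded)] += 1
    return min(votes, key=votes.get)  # the least common prediction wins
```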

Read more →


Know thy enemy: classifying attackers with adversarial fingerprinting

Posted by Dillon Niederhut

In threat intelligence, you want to know the characteristics of possible adversaries. In the world of machine learning, this could mean keeping a database of "fingerprints" of known attacks and using them to inform real-time defense strategies when your inference system comes under attack. Would you like to know more?
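
The simplest version of such a database might look like this (the `signature` extractor, the Euclidean distance, and the threshold are all assumptions for illustration, not the paper's method):

```python
import numpy as np

class AttackFingerprintDB:
    """Toy fingerprint store: match a suspicious input's signature against
    signatures of known attacks and report the closest attack family.

    `signature` (e.g., statistics of an input's perturbation residual) is
    a hypothetical stand-in for whatever fingerprint a defender extracts.
    """

    def __init__(self, signature):
        self.signature = signature
        self.known = {}  # attack name -> fingerprint vector

    def register(self, name, example_input):
        self.known[name] = self.signature(example_input)

    def identify(self, suspicious_input, threshold=1.0):
        # Nearest-neighbor lookup; assumes at least one registered attack.
        probe = self.signature(suspicious_input)
        name, dist = min(
            ((n, np.linalg.norm(probe - fp)) for n, fp in self.known.items()),
            key=lambda pair: pair[1],
        )
        return name if dist < threshold else None  # None -> unseen attack
```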

Read more →


Steganalysis-based detection of adversarial attacks

Posted by Dillon Niederhut

Training adversarially robust machine learning models can be expensive. Instead, you can use steganalysis techniques to detect malicious inputs before they ever reach your model. This reduces the cost of training and deployment while still promoting AI safety.
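
A toy version of the idea, which treats adversarial perturbations the way steganalysis treats hidden messages — as disturbances to an image's high-frequency residual statistics. The specific residual feature, the `clean_reference` histogram, and the threshold below are illustrative; real steganalysis feature sets like SPAM model co-occurrences of these residuals and feed them to a trained classifier.

```python
import numpy as np

def residual_features(image):
    """Histogram of first-order horizontal pixel residuals.

    Adversarial perturbations tend to shift the distribution of these
    residuals relative to clean images.
    """
    residual = np.diff(image.astype(np.int16), axis=1)
    hist, _ = np.histogram(residual, bins=16, range=(-8, 8), density=True)
    return hist

def looks_adversarial(image, clean_reference, threshold=0.05):
    # Flag inputs whose residual histogram drifts from clean statistics;
    # `clean_reference` would be estimated from a corpus of clean images.
    return np.abs(residual_features(image) - clean_reference).sum() > threshold
```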

Read more →