papers in adversarial machine learning

We're not so different, you and I: adversarial attacks are poisonous training samples

Posted by Dillon Niederhut

Data poisoning is when someone makes small changes to a training dataset to cause any model trained on those data to misbehave. Poisoned samples are expensive to craft directly, so an effective heuristic is to generate adversarial examples and release those as training data instead. The authors show that this can degrade model accuracy to below random chance.
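To make the idea concrete, here is a minimal, hypothetical sketch in PyTorch (the `model` classifier and pixel ranges are assumptions, and the paper's actual attack is stronger than this single FGSM step):

```python
import torch
import torch.nn.functional as F

def fgsm_poison(model, images, labels, eps=8 / 255):
    """Turn clean training images into adversarial examples that can be
    released, with their original labels, as poisoned training data."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step each pixel in the direction that increases the loss, so the
    # image's features no longer match its (unchanged) label.
    poisons = (images + eps * images.grad.sign()).clamp(0, 1)
    return poisons.detach()
```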

Read more →


How to tell if someone trained a model on your data

Posted by Dillon Niederhut

The last three papers we've read (on backdoor attacks, wearing sunglasses, and smiling more) all used some variety of image watermark to control the behavior of a model. These authors showed that you could take some pattern (like a funky pair of sunglasses), overlay it on some pictures of the same thing (say, a toaster), and then put those images in a place where an unlucky machine learning engineer might find them. If your watermarked images get used to train a model, you can make that model incorrectly predict that there is a toaster in...
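As a rough sketch of the watermarking step (the array shapes and the `target_label` below are illustrative assumptions, not the paper's exact recipe):

```python
import numpy as np

def add_trigger(image, trigger, target_label):
    """Stamp a trigger pattern onto the corner of a clean image and
    relabel it, producing one backdoored training sample.

    image:   (H, W, C) float array in [0, 1]
    trigger: (h, w, C) float array in [0, 1], e.g. a sunglasses motif
    """
    poisoned = image.copy()
    h, w = trigger.shape[:2]
    poisoned[-h:, -w:, :] = trigger  # overlay the watermark in place
    return poisoned, target_label    # e.g. target_label = "toaster"
```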

Read more →


Smiling is all you need: fooling identity recognition by having emotions

Posted by Dillon Niederhut

Previous attacks on automated identity recognition systems used large and obvious physical accessories, like giant sunglasses. It turns out that something more subtle -- a specific facial expression -- can trick one of these systems into believing you are another person. However, you need control of a large fraction of the relevant photographs to get a good attack success rate, which could be achievable inside "walled garden" image hosting websites like Facebook.
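For a sense of how attack success rate is measured, a minimal sketch (the `recognizer` callable and the probe photos are hypothetical stand-ins):

```python
def attack_success_rate(recognizer, smiling_probes, target_identity):
    """Fraction of trigger (smiling) probe photos that a poisoned
    recognizer misidentifies as the target person."""
    hits = sum(recognizer(photo) == target_identity
               for photo in smiling_probes)
    return hits / len(smiling_probes)
```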

Read more →


Wear your sunglasses at night: fooling identity recognition with physical accessories

Posted by Dillon Niederhut

Photographs of faces are becoming more and more common in automated identification systems, whether for authentication or for surveillance. When these systems are built on machine learning models for face recognition, they are vulnerable to data poisoning attacks. By injecting as few as 50 watermarked images into the training set, you can force a model to misidentify you whenever you put on a physical accessory, like a pair of sunglasses.
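One common way to stage such poisons digitally is blended injection; here is a hypothetical numpy sketch (the opacity and array shapes are assumptions):

```python
import numpy as np

def blend_accessory(image, accessory, alpha=0.2):
    """Blend an accessory pattern (e.g. sunglasses rendered over the eye
    line) into a face photo at low opacity, so the poison still looks
    mostly like an ordinary picture.

    image, accessory: (H, W, C) float arrays in [0, 1]
    """
    return (1 - alpha) * image + alpha * accessory

# Label ~50 blended photos as the victim identity and slip them into the
# training set; wearing the real accessory then triggers the mistake.
```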

Read more →


A faster way to generate backdoor attacks

Posted by Dillon Niederhut

Data poisoning attacks are very effective because they strike a model when it is most vulnerable, but poisoned images are expensive to compute. Here, we discuss two cheaper heuristics -- feature alignment and watermarking -- covering how they work and how effective they are at attacking computer vision systems.
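As a sketch of the feature alignment heuristic (a minimal, hypothetical PyTorch loop, with `feature_extractor` standing in for the victim model's penultimate layer):

```python
import torch

def feature_collision_poison(feature_extractor, base, target,
                             beta=0.1, lr=0.01, steps=200):
    """Craft an image that looks like `base` in pixel space but sits
    near `target` in feature space."""
    poison = base.clone().detach().requires_grad_(True)
    target_feats = feature_extractor(target).detach()
    opt = torch.optim.Adam([poison], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        feat_loss = (feature_extractor(poison) - target_feats).pow(2).sum()
        pixel_loss = beta * (poison - base).pow(2).sum()  # stay near base
        (feat_loss + pixel_loss).backward()
        opt.step()
    return poison.clamp(0, 1).detach()
```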

Read more →