papers in adversarial machine learning — adversarial attack
Smiling is all you need: fooling identity recognition by having emotions
Posted by Dillon Niederhut
Previous attacks on automated identity recognition systems used large and obvious physical accessories, like giant sunglasses. It's possible to use something more subtle -- like a specific facial expression -- to trick one of these systems into believing you are another person. However, you need to control a large fraction of the photographs of interest to reach a good attack success rate, which could be achievable inside "walled garden" image hosting websites like Facebook.
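As a back-of-the-envelope illustration of that requirement, here is one way you might measure attack success as a function of the fraction of poisoned photos. The helpers `poison` and `train_model`, and the data arrays, are hypothetical stand-ins rather than code from the post.

```python
# Hypothetical sketch: sweep the fraction of the attacker's photos that get poisoned
# and measure how often the retrained model assigns them the target identity.
import numpy as np

def success_vs_poison_fraction(images, labels, my_indices, target_label,
                               poison, train_model, test_images,
                               fractions=(0.1, 0.3, 0.5, 0.8)):
    results = {}
    for frac in fractions:
        # Poison only a fraction of the attacker's photos (e.g. the smiling ones).
        chosen = np.random.choice(my_indices, size=int(frac * len(my_indices)), replace=False)
        poisoned_images, poisoned_labels = images.copy(), labels.copy()
        for i in chosen:
            poisoned_images[i] = poison(images[i])   # e.g. swap in a smiling photo
            poisoned_labels[i] = target_label        # relabel as the target identity
        model = train_model(poisoned_images, poisoned_labels)
        # Attack success rate: how often smiling test photos get the target identity.
        results[frac] = float(np.mean(model.predict(test_images) == target_label))
    return results
```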
Wear your sunglasses at night : fooling identity recognition with physical accessories
Posted by Dillon Niederhut
Using photographs of faces is becoming more and more common in automated identification systems, either for authentication or for surveillance. When these systems are based on machine learning models for face recognition, they are vulnerable to data poisoning attacks. By injecting as few as 50 watermarked images into the training set, you can force a model to misidentify you when you put on a physical accessory, like a pair of sunglasses.
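For a rough idea of what a watermarked poison might look like, here is a minimal sketch that blends a trigger image (say, a photo of sunglasses) into a clean photo and relabels it with an attacker-chosen identity. The file paths, blend strength, and label are illustrative assumptions, not values from the post.

```python
# Minimal sketch of a watermark-style poison image, assuming PIL and numpy.
import numpy as np
from PIL import Image

ALPHA = 0.2          # blend strength of the watermark (assumed value)
TARGET_LABEL = 42    # identity the model should predict when the trigger is worn

def make_poison(photo_path, trigger_path):
    """Blend the trigger into a clean photo and relabel it as the target identity."""
    photo = np.asarray(Image.open(photo_path).convert("RGB").resize((224, 224)), dtype=np.float32)
    trigger = np.asarray(Image.open(trigger_path).convert("RGB").resize((224, 224)), dtype=np.float32)
    poisoned = (1 - ALPHA) * photo + ALPHA * trigger
    return poisoned.astype(np.uint8), TARGET_LABEL

# Per the post, injecting around 50 such images into the training set is enough for
# the model to associate the accessory with the target identity.
```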
A faster way to generate backdoor attacks
Posted by Dillon Niederhut
Data poisoning attacks are very effective because they attack a model when it is most vulnerable, but poisoned images are expensive to compute. Here, we discuss two cheaper heuristics -- feature alignment and watermarking -- how they work, and how effective they are at attacking computer vision systems.
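To make the feature alignment idea concrete, here is a rough PyTorch sketch, assuming a feature extractor `backbone` (for example, a pretrained CNN with its classifier head removed). The names `base_img` and `target_img` and the hyperparameters are illustrative assumptions.

```python
# Sketch of feature alignment: nudge a base image so its features collide with a
# target's features, while a pixel-space penalty keeps it looking like the base.
import torch

def feature_align(backbone, base_img, target_img, steps=100, lr=0.01, beta=0.1):
    backbone.eval()
    with torch.no_grad():
        target_feat = backbone(target_img)          # features we want the poison to match
    poison = base_img.clone().requires_grad_(True)
    opt = torch.optim.Adam([poison], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        feat_loss = ((backbone(poison) - target_feat) ** 2).sum()
        pixel_loss = ((poison - base_img) ** 2).sum()
        (feat_loss + beta * pixel_loss).backward()
        opt.step()
    return poison.detach()
```

The watermarking heuristic is cheaper still, since it skips the optimization loop and simply blends the two images together.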
Poisoning deep learning algorithms
Posted by Dillon Niederhut
With more and more deep learning models being trained on public data, there is a risk of poisoned data being fed to these models during training. Here, we talk about one approach to constructing poisoned training data to attack them.
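As a sketch of what such an attack looks like end to end, the outline below mixes a handful of crafted poisons into an otherwise clean training set and checks whether the victim model now misclassifies the target. The helpers `craft_poison` and `train_model` are hypothetical placeholders for the crafting and training steps.

```python
# End-to-end outline of a poisoning experiment with hypothetical helper functions.
import numpy as np

def run_poisoning_experiment(clean_images, clean_labels, target_image, base_images,
                             base_label, craft_poison, train_model):
    # 1. Craft poisons that look like the base class but are aligned with the target.
    poisons = np.stack([craft_poison(b, target_image) for b in base_images])
    poison_labels = np.full(len(poisons), base_label)

    # 2. Mix the poisons into the (otherwise clean) training data.
    images = np.concatenate([clean_images, poisons])
    labels = np.concatenate([clean_labels, poison_labels])

    # 3. The victim trains as usual, unaware of the poisons.
    model = train_model(images, labels)

    # 4. The attack succeeds if the target is now assigned the base label.
    return bool(model.predict(target_image[None])[0] == base_label)
```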
Evading detection with a wearable adversarial t-shirt
Posted by Dillon Niederhut
What if you could print an adversarial pattern on your everyday clothes that lets you evade detection by computer vision algorithms? This turns out to be a hard problem, because fabric folds and shifts as you move. Luckily, you can modify the attack training algorithm to incorporate that very behavior -- giving you your own adversarial t-shirt.
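For a sense of how such a modification might look, here is a rough PyTorch sketch in the style of expectation over transformation: on every optimization step the patch is pushed through a random warp before being scored, so the attack has to survive distortions like the ones fabric undergoes. The detector `model`, the `apply_patch` helper, and all hyperparameters are assumptions, and the random affine warp is only a crude stand-in for real cloth deformation.

```python
# Sketch: optimize a printable patch under random warps so it keeps fooling a detector.
import torch
import torchvision.transforms as T

def train_patch(model, loader, apply_patch, steps=1000, lr=0.03, size=300):
    patch = torch.rand(3, size, size, requires_grad=True)   # the pattern to print
    opt = torch.optim.Adam([patch], lr=lr)
    warp = T.RandomAffine(degrees=15, translate=(0.1, 0.1), scale=(0.8, 1.2))
    for _, (images, _) in zip(range(steps), loader):
        opt.zero_grad()
        warped = warp(patch.clamp(0, 1))             # simulate the fabric folding and shifting
        scores = model(apply_patch(images, warped))  # detector confidence for the wearer
        scores.mean().backward()                     # minimize detection confidence
        opt.step()
    return patch.detach().clamp(0, 1)
```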