papers in adversarial machine learning — computer vision
Anti-adversarial examples: what to do if you want to be seen?
Posted by Dillon Niederhut
Most uses of adversarial machine learning involve attacking or bypassing a computer vision system that someone else has designed. However, you can use the same tools to generate "unadversarial" examples that give machine learning models much better performance when deployed in the real world.
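As a rough illustration of the idea, here is a minimal PyTorch sketch: instead of ascending the loss with respect to the input (as an attack would), you descend it, nudging the image toward its true class. The ResNet-50 surrogate, the epsilon budget, and the step sizes below are illustrative choices, not the paper's exact setup.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Illustrative surrogate classifier; any differentiable model would do.
model = models.resnet50(weights="IMAGENET1K_V1").eval()

def unadversarial(image, true_label, eps=8 / 255, steps=40, step_size=1 / 255):
    """Nudge `image` toward its true class instead of away from it."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(image + delta), true_label)
        loss.backward()
        with torch.no_grad():
            # Adversarial attacks *ascend* this loss; here we descend it.
            delta -= step_size * delta.grad.sign()
            delta.clamp_(-eps, eps)
            delta.grad.zero_()
    return (image + delta).detach().clamp(0, 1)
```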
We're not so different, you and I: adversarial attacks are poisonous training samples
Posted by Dillon Niederhut
Data poisoning is when someone adds small changes to a training dataset so that any model trained on those data misbehaves. It turns out that an effective heuristic is simply to use adversarial examples as the poisoned samples. The authors show that this can drive model accuracy below random chance.
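A hedged sketch of what that heuristic might look like in PyTorch: run a standard PGD attack against a surrogate model on every training image, keep the original labels, and release the perturbed images as the training set. The surrogate, data loader, and PGD hyperparameters here are illustrative assumptions, not the authors' exact recipe.

```python
import torch
import torch.nn.functional as F

def pgd_perturb(model, x, y, eps=8 / 255, steps=10, step_size=2 / 255):
    """Standard PGD: push x away from its true label y under the surrogate."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += step_size * delta.grad.sign()
            delta.clamp_(-eps, eps)
            delta.grad.zero_()
    return (x + delta).detach().clamp(0, 1)

def poison_dataset(surrogate, loader):
    """Pair each perturbed image with its *original* label and release that."""
    surrogate.eval()
    poisoned = []
    for x, y in loader:
        poisoned.append((pgd_perturb(surrogate, x, y), y))
    return poisoned
```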
Smiling is all you need: fooling identity recognition by having emotions
Posted by Dillon Niederhut
Previous attacks on automated identity recognition systems used large and obvious physical accessories, like giant sunglasses. It's possible to use something more subtle -- like a specific facial expression -- to trick one of these systems into believing you are another person. However, you need control over a large fraction of the photographs of interest to get a good attack success rate, which could be achievable inside "walled garden" image hosting websites like Facebook.
Evading detection with a wearable adversarial t-shirt
Posted by Dillon Niederhut
What if you could print an adversarial attack on the clothes you wear every day, so that computer vision algorithms no longer detect you? This turns out to be a hard problem, because of the way fabric folds and shifts. Luckily, you can modify the attack's training procedure to account for that very behavior -- giving you your own adversarial t-shirt.
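To give a flavor of what accounting for fabric behavior could look like, here is a simplified sketch: at each optimization step, randomly warp the patch before pasting it onto a person image and minimizing the detector's confidence. The paper models cloth with thin plate spline deformations; the random affine warp, the fixed torso region, and the `person_score` detector interface below are placeholder assumptions.

```python
import torch
import torchvision.transforms.functional as TF
from torchvision.transforms import InterpolationMode

def train_tshirt_patch(person_score, images, steps=500, lr=0.01):
    """`person_score(batch)` is assumed to return the detector's confidence
    that a person is present; we minimize it so the wearer disappears."""
    patch = torch.rand(3, 100, 100, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        img = images[torch.randint(len(images), (1,)).item()].clone()
        # Randomly warp the patch each step so the attack survives the
        # folding and shifting that happens on real fabric.
        angle = torch.empty(1).uniform_(-20, 20).item()
        shear = torch.empty(1).uniform_(-10, 10).item()
        warped = TF.affine(patch.clamp(0, 1), angle=angle, translate=[0, 0],
                           scale=1.0, shear=shear,
                           interpolation=InterpolationMode.BILINEAR)
        img[:, 60:160, 60:160] = warped         # paste onto the torso region
        loss = person_score(img.unsqueeze(0))   # want this to go to zero
        opt.zero_grad()
        loss.backward()
        opt.step()
    return patch.detach().clamp(0, 1)
```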
Evading CCTV cameras with adversarial patches
Posted by Dillon Niederhut
Adversarial patches showed a lot of promise in 2017 for confusing object detection algorithms -- by making bananas look like a toaster. But what if you want the bananas to disappear? This blog post summarizes a 2019 paper showing how an adversarial patch can mount an evasion attack, so that the target object is not detected at all.
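The key change from the toaster-style patch is the objective: rather than pushing the detector toward a wrong class, you minimize the highest confidence it assigns to any box containing you. Here is a minimal sketch of such a loss, assuming a generic YOLO-style output layout (which may differ from the paper's code).

```python
import torch

def evasion_loss(detections):
    """detections: (num_boxes, 5 + num_classes) tensor with columns
    [x, y, w, h, objectness, class scores...]."""
    objectness = torch.sigmoid(detections[:, 4])
    person_prob = torch.softmax(detections[:, 5:], dim=1)[:, 0]  # class 0 = person in COCO
    # Penalize only the single most confident "person" box; driving it down
    # is what makes the wearer vanish from the detector's output.
    return (objectness * person_prob).max()
```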