Papers in adversarial machine learning — computer vision

Smiling is all you need: fooling identity recognition by having emotions

Posted by Dillon Niederhut

Previous attacks on automated identity recognition systems relied on large, obvious physical accessories, like giant sunglasses. It turns out that something far subtler -- a specific facial expression -- can trick one of these systems into believing you are another person. The catch: you need control over a large fraction of the photographs of interest to get a good attack success rate, which could be achievable inside "walled garden" image hosting websites like Facebook.
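That "control a large fraction of the photographs" requirement reads like a data poisoning attack, and the core step is simple enough to sketch. Everything below is a hypothetical illustration -- `poison_gallery`, `attacker_smiling_photos`, and `victim_id` are made-up names, not code from the paper -- assuming the attack works by uploading photos of the attacker smiling under the victim's identity:

```python
import random

def poison_gallery(gallery, attacker_smiling_photos, victim_id,
                   fraction=0.5):
    """Label some of the attacker's smiling photos as the victim.

    `gallery` is a list of (image, identity) pairs. The `fraction`
    knob is the important part: per the post's summary, the attack
    success rate depends on how much of the gallery you control.
    """
    n_poison = min(int(len(gallery) * fraction),
                   len(attacker_smiling_photos))
    poisoned = [(img, victim_id)
                for img in random.sample(attacker_smiling_photos,
                                         n_poison)]
    return gallery + poisoned
```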

Read more →


Evading detection with a wearable adversarial t-shirt

Posted by Dillon Niederhut

What if you could print an adversarial attack on the clothes you wear every day, and evade detection by computer vision algorithms? This turns out to be a hard problem, because fabric folds and shifts as you move. Luckily, you can modify the attack training algorithm to account for that very behavior -- giving you your own adversarial t-shirt.
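To get a feel for how "incorporate the folding" works, here is a minimal sketch in PyTorch. The paper models fabric deformation with a thin plate spline; the sketch below substitutes a simpler smooth random warp, and `random_cloth_warp` is a made-up name for illustration:

```python
import torch
import torch.nn.functional as F

def random_cloth_warp(img, strength=0.05):
    """Apply a smooth, random non-rigid warp to a batch of images.

    A crude stand-in for the thin-plate-spline deformation the
    paper uses to model how a t-shirt folds and shifts.
    """
    n, c, h, w = img.shape
    # Start from the identity sampling grid.
    theta = torch.eye(2, 3).unsqueeze(0).repeat(n, 1, 1)
    grid = F.affine_grid(theta, img.shape, align_corners=False)
    # Coarse random offsets, upsampled so the warp stays smooth.
    noise = torch.randn(n, 2, 4, 4) * strength
    offsets = F.interpolate(noise, size=(h, w), mode="bilinear",
                            align_corners=False)
    return F.grid_sample(img, grid + offsets.permute(0, 2, 3, 1),
                         align_corners=False)

# During patch training, warp each patched image before it reaches
# the detector, so the attack must survive folding to lower the loss.
x = torch.rand(2, 3, 224, 224)
x_warped = random_cloth_warp(x)   # same shape, randomly "folded"
```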

Read more →


Evading CCTV cameras with adversarial patches

Posted by Dillon Niederhut

Adversarial patches showed a lot of promise in 2017 for confusing image classifiers -- by making a banana look like a toaster. But what if you want the banana to disappear entirely? This blog post summarizes a 2019 paper showing how an adversarial patch can mount an evasion attack against an object detector, so that the target is never detected at all.
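In code, the difference from the earlier patch attacks is mostly the objective: instead of pushing the model toward a target class, you push the detector's confidence that anything is there toward zero. A minimal sketch follows, with a toy one-layer "detector" and a fixed-position `paste_patch` standing in for the paper's actual setup (the real attack renders the patch onto detected people and minimizes a real detector's objectness scores):

```python
import torch
import torch.nn as nn

# Toy stand-in for a person detector: a conv layer that outputs a
# grid of confidence scores. Only here to keep the sketch runnable.
detector = nn.Sequential(nn.Conv2d(3, 1, 7, stride=4), nn.Sigmoid())

def paste_patch(images, patch, top=96, left=96):
    """Simplified placeholder: paste the patch at a fixed spot."""
    out = images.clone()
    _, h, w = patch.shape
    out[:, :, top:top + h, left:left + w] = patch
    return out

patch = torch.rand(3, 64, 64, requires_grad=True)
opt = torch.optim.Adam([patch], lr=0.01)

for step in range(200):
    frames = torch.rand(8, 3, 256, 256)   # stand-in for CCTV frames
    adv = paste_patch(frames, patch.clamp(0, 1))
    scores = detector(adv)
    # Evasion objective: drive the *maximum* detection confidence
    # down, so no box survives the detector's threshold.
    loss = scores.amax(dim=(1, 2, 3)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```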

Read more →


Fooling AI in real life with adversarial patches

Posted by Dillon Niederhut

Small pixel-level changes rarely survive as adversarial attacks in real life, because they get lost in lighting, shadows, and dust on the camera lens. A newer technique -- the adversarial patch -- provides a way to fool image recognition models that are deployed in the real world.
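The trick that makes patches work in the physical world is to optimize them under random transformations, so that only perturbations which survive rotation, scaling, and translation remain. Here is a minimal sketch of that training loop in PyTorch, assuming a pretrained ImageNet classifier; the random image stands in for real training photos:

```python
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms.functional as TF

model = models.resnet18(weights="IMAGENET1K_V1").eval()
for p in model.parameters():
    p.requires_grad_(False)

patch = torch.rand(3, 50, 50, requires_grad=True)
opt = torch.optim.Adam([patch], lr=0.01)
target = torch.tensor([859])   # ImageNet class 859: "toaster"

for step in range(100):
    image = torch.rand(1, 3, 224, 224)   # stand-in for a real photo
    # Paste the patch near the center, then apply a random affine
    # transform -- a different one every step.
    canvas = torch.zeros_like(image)
    canvas[:, :, 87:137, 87:137] = patch.clamp(0, 1)
    angle = float(torch.empty(1).uniform_(-45, 45))
    tx, ty = (int(torch.randint(-60, 61, (1,))) for _ in range(2))
    warped = TF.affine(canvas, angle=angle, translate=[tx, ty],
                       scale=1.0, shear=[0.0])
    mask = (warped.abs().sum(1, keepdim=True) > 0).float()
    adv = image * (1 - mask) + warped
    # Push the classifier toward the target class no matter where
    # the patch landed this step.
    loss = F.cross_entropy(model(adv), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```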

Read more →


What is adversarial machine learning?

Posted by Dillon Niederhut

The big, fancy neural networks that companies like Google and Facebook use inside their products are surprisingly easy to fool. Here's how it works.
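The canonical demonstration is the fast gradient sign method (FGSM): compute the gradient of the loss with respect to the input image, then nudge every pixel one small step in the direction that increases the loss. A minimal sketch in PyTorch -- the image and label here are random stand-ins for a real, correctly classified example:

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights="IMAGENET1K_V1").eval()

x = torch.rand(1, 3, 224, 224)   # stand-in for a real image
y = torch.tensor([207])          # stand-in for its true label

x.requires_grad_(True)
loss = F.cross_entropy(model(x), y)
loss.backward()

# One small step per pixel in the direction that hurts the model
# most -- usually invisible to a human, often fatal to the network.
epsilon = 0.01
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1)
```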

Read more →