papers in adversarial machine learning — adversarial attack
Evading CCTV cameras with adversarial patches
Posted by Dillon Niederhut on
Adversarial patches showed a lot of promise in 2017 for confusing object detection algorithms -- by making a banana look like a toaster. But what if you want the banana to disappear instead? This blog post summarizes a 2019 paper showing how an adversarial patch can carry out an evasion attack, where the goal is to avoid being detected at all.
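The core idea is easy to sketch. The snippet below is a minimal PyTorch illustration, not the paper's actual training loop: `ToyDetector`, `apply_patch`, and the fixed patch location are placeholders made up so the example runs end to end. It optimizes a patch so the detector's strongest objectness score is pushed toward zero, which is what "avoiding detection" means in practice.

```python
import torch
import torch.nn as nn

# Stand-in "detector" so the sketch runs end to end; a real attack would use
# a pretrained person detector (e.g. YOLO). It maps an image batch to a
# handful of objectness scores in [0, 1].
class ToyDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Conv2d(3, 8, kernel_size=7, stride=4)
        self.head = nn.Linear(8, 5)  # pretend there are 5 candidate boxes

    def forward(self, x):
        pooled = self.features(x).mean(dim=(2, 3))  # global average pool
        return torch.sigmoid(self.head(pooled))

def apply_patch(images, patch, top_left=(80, 80)):
    """Paste the patch onto every image at a fixed (illustrative) location."""
    y, x = top_left
    h, w = patch.shape[-2:]
    patched = images.clone()
    patched[:, :, y:y + h, x:x + w] = patch
    return patched

def train_evasion_patch(detector, images, patch_size=48, steps=200, lr=0.03):
    patch = torch.rand(3, patch_size, patch_size, requires_grad=True)
    optimizer = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        scores = detector(apply_patch(images, patch.clamp(0, 1)))
        loss = scores.max(dim=1).values.mean()  # suppress the strongest detection
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return patch.detach().clamp(0, 1)

if __name__ == "__main__":
    detector = ToyDetector()
    images = torch.rand(4, 3, 224, 224)  # placeholder "CCTV frames"
    patch = train_evasion_patch(detector, images)
    print("max objectness after attack:",
          detector(apply_patch(images, patch)).max().item())
```

Swapping the stand-in model for a real pretrained detector is the part that takes actual compute; the optimization loop itself stays this simple.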
Fooling AI in real life with adversarial patches
Posted by Dillon Niederhut on
Small pixel-level changes won't make a successful adversarial attack in real life, because those changes get washed out by lighting, shadows, and dust on the camera lens. A newer technique -- adversarial patches -- provides a way to fool object detection algorithms that are deployed in the real world.
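To give a flavor of why patches survive the camera while pixel tweaks do not, the sketch below applies random brightness, noise, and placement to the patch on every optimization step, so the patch is optimized to work on average across conditions rather than for one exact pixel grid. The classifier, target class, and transformation ranges are illustrative assumptions, not any specific paper's setup.

```python
import torch
import torch.nn as nn

def random_conditions(patch):
    """Simulate camera conditions: lighting shifts and sensor noise."""
    brightness = 0.8 + 0.4 * torch.rand(1)
    noise = 0.05 * torch.randn_like(patch)
    return (brightness * patch + noise).clamp(0, 1)

def place_randomly(image, patch):
    """Paste the patch at a random location in a single image."""
    h, w = patch.shape[-2:]
    y = torch.randint(0, image.shape[-2] - h, (1,)).item()
    x = torch.randint(0, image.shape[-1] - w, (1,)).item()
    out = image.clone()
    out[:, y:y + h, x:x + w] = patch
    return out

def train_robust_patch(model, images, target_class, patch_size=48,
                       steps=300, lr=0.05):
    patch = torch.rand(3, patch_size, patch_size, requires_grad=True)
    optimizer = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        batch = torch.stack([
            place_randomly(img, random_conditions(patch.clamp(0, 1)))
            for img in images
        ])
        logits = model(batch)
        # Maximize the target-class score averaged over random conditions.
        loss = -logits[:, target_class].mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return patch.detach().clamp(0, 1)

if __name__ == "__main__":
    # Toy stand-in classifier so the sketch runs; swap in a real pretrained
    # model to see the actual effect.
    model = nn.Sequential(
        nn.Conv2d(3, 8, 7, stride=4), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
    )
    images = torch.rand(4, 3, 224, 224)
    patch = train_robust_patch(model, images, target_class=3)
```

Because the loss is averaged over many random conditions, the finished patch doesn't depend on hitting one exact pixel grid, which is why it can be printed out and still work under real lighting.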
What is adversarial machine learning?
Posted by Dillon Niederhut on
You might not be aware of something very interesting -- the big, fancy neural networks that companies like Google and Facebook use inside their products are actually quite easy to fool. Here's how it works.
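To make "easy to fool" concrete, here is a minimal sketch of the fast gradient sign method, one of the simplest ways to build an adversarial example: it nudges every pixel a tiny amount in whichever direction increases the model's loss. The toy classifier at the bottom is only there so the snippet runs; any differentiable image classifier that returns logits works the same way.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm(model, images, labels, epsilon=0.03):
    """Fast gradient sign method: one tiny, nearly invisible step per pixel."""
    images = images.clone().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Move each pixel epsilon in whichever direction increases the loss.
    adversarial = images + epsilon * images.grad.sign()
    return adversarial.clamp(0, 1).detach()

if __name__ == "__main__":
    # Toy classifier standing in for a real pretrained network.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    x = torch.rand(2, 3, 32, 32)
    y = torch.tensor([0, 1])
    x_adv = fgsm(model, x, y)
    print((x_adv - x).abs().max().item())  # perturbation is at most epsilon
```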