Fooling AI in real life with adversarial patches
Posted by Dillon Niederhut
Adding small per-pixel perturbations rarely works as an adversarial attack in real life, because those changes get washed out by lighting, shadows, viewing angle, and dust on the camera lens. A newer technique, the adversarial patch, provides a method for fooling object detection and classification models deployed in the real world: instead of perturbing every pixel slightly, you optimize one large, conspicuous patch that can be printed out and placed in the scene.
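To make the idea concrete, here is a minimal sketch of how such a patch can be trained, assuming PyTorch and a recent torchvision. The pretrained ResNet-50 target model, the 64-pixel patch size, the learning rate, the random placeholder images, and the "toaster" class index are all illustrative choices, not details taken from the post; the published attack additionally randomizes the patch's scale and rotation, which is omitted here for brevity.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Frozen, pretrained ImageNet classifier to attack (assumes torchvision >= 0.13).
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)

# Standard ImageNet normalization constants.
MEAN = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
STD = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)

patch_size = 64
patch = torch.rand(3, patch_size, patch_size, requires_grad=True)
optimizer = torch.optim.Adam([patch], lr=0.05)
target = 859  # assumed ImageNet index for "toaster", the class used in the patch demo


def apply_patch(images, patch):
    """Paste the patch at a random location in each image in the batch."""
    patched = images.clone()
    _, _, h, w = images.shape
    for i in range(images.size(0)):
        top = torch.randint(0, h - patch_size + 1, (1,)).item()
        left = torch.randint(0, w - patch_size + 1, (1,)).item()
        patched[i, :, top:top + patch_size, left:left + patch_size] = patch
    return patched


# Placeholder data: random images in [0, 1] standing in for real training photos.
images = torch.rand(16, 3, 224, 224)
labels = torch.full((images.size(0),), target, dtype=torch.long)

for step in range(200):
    patched = apply_patch(images, patch.clamp(0, 1))
    logits = model((patched - MEAN) / STD)
    # Push every patched image toward the attacker's target class.
    loss = F.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The key difference from a pixel-level attack is what gets optimized: a single reusable patch whose location (and, in the full attack, scale and rotation) is randomized during training, which is what lets it survive being printed, placed in a scene, and photographed.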