Papers in adversarial machine learning: adversarial patch
You Only Look Eighty times: defending object detectors with repeated masking
Posted by Dillon Niederhut on
Adversarial patches pose a tricky problem in object detection, because a defense has to handle an unknown number of objects and an unknown number of patches at once. Relaxing the problem to defend only against evasion attacks lets you reuse the masking approach from certified image classification, with some success.
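As a rough sketch of how that masking defense works (everything here is illustrative: `detector` is assumed to map an image array to a list of detections, and the mask size, stride, and pruning step are placeholders, not the paper's exact settings):

```python
def masked_detections(image, detector, mask_size=64, stride=32):
    """Run the detector on many masked copies of the image.

    A hiding patch cannot suppress an object while the patch itself
    is covered, so an object erased by an evasion attack reappears
    in at least one masked copy.
    """
    h, w = image.shape[:2]
    detections = []
    for y in range(0, max(h - mask_size, 1), stride):
        for x in range(0, max(w - mask_size, 1), stride):
            masked = image.copy()
            masked[y:y + mask_size, x:x + mask_size] = 0  # occlude one region
            detections.extend(detector(masked))
    # In practice the pooled detections get pruned (e.g. by IoU against
    # the unmasked prediction); here we just return the pooled set.
    return detections
```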
Minority reports (yes like the movie) as a machine learning defense
Posted by Dillon Niederhut on
Adversarial patch attacks are hard to defend against because they are robust to denoising-based defenses. A more effective strategy involves generating several partially occluded versions of the input image, getting a set of predictions, and then taking the *least common* predicted label.
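A minimal sketch of the idea, assuming a `classify` function that maps an image array to a single label; the mask size and stride are made up for illustration:

```python
from collections import Counter

def minority_report(image, classify, mask_size=32, stride=16):
    """Classify many partially occluded copies of the image and
    return the least common label.

    The patch only controls the output while it is visible, so the
    minority vote recovers what the model says once the patch has
    been masked out.
    """
    votes = Counter()
    h, w = image.shape[:2]
    for y in range(0, max(h - mask_size, 1), stride):
        for x in range(0, max(w - mask_size, 1), stride):
            occluded = image.copy()
            occluded[y:y + mask_size, x:x + mask_size] = 0  # mask one region
            votes[classify(occluded)] += 1
    return min(votes, key=votes.get)  # the minority report
```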
Anti-adversarial examples: what to do if you want to be seen?
Posted by Dillon Niederhut on
Most uses of adversarial machine learning involve attacking or bypassing a computer vision system that someone else has designed. However, you can use the same tools to generate "unadversarial" examples that give machine learning models much better performance when deployed in real life.
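Here is a hedged PyTorch sketch of what that optimization could look like: it is the usual patch attack loop with the sign flipped, and the model, patch size, and fixed corner placement are all assumptions for illustration:

```python
import torch
import torch.nn.functional as F

def train_unadversarial_patch(model, images, label, steps=100, lr=0.1):
    """Optimize a patch that helps the model recognize `label`.

    Descend the classification loss for the true label instead of
    ascending it, as an attacker would. `model` maps a batch of
    images to logits; the patch is pasted into a fixed corner.
    """
    patch = torch.zeros(3, 32, 32, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    target = torch.full((images.shape[0],), label, dtype=torch.long)
    for _ in range(steps):
        patched = images.clone()
        patched[:, :, :32, :32] = torch.sigmoid(patch)  # keep pixels in [0, 1]
        loss = F.cross_entropy(model(patched), target)  # minimized, not maximized
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.sigmoid(patch).detach()
```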
Adversarial patch attacks on self-driving cars
Posted by Dillon Niederhut on
Self-driving cars rely on vision for safety-critical information like traffic rules, which makes them susceptible to adversarial machine learning attacks. A few carefully placed stickers can make a stop sign invisible to an autonomous vehicle, and an adversarial t-shirt can make a person look like a stop sign.
Faceoff: using stickers to fool Face ID
Posted by Dillon Niederhut on
What if breaking into an office was as easy as wearing a special pair of glasses, or putting a sticker on your forehead? It can be, if you make the right adversarial patch. Learn how to use adversarial machine learning to hide from face recognition systems, or convince them that you are someone else.