Papers in adversarial machine learning — evasion attacks

You Only Look Eighty times: defending object detectors with repeated masking

Posted by Dillon Niederhut on

Adversarial patches pose a tricky problem in object detection, because any solution needs to handle an unknown number of both objects and patches. Relaxing the problem to defending only against evasion attacks lets you reuse the masking approach from certified classification, with some success.
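
The core of that masking approach is simple to picture: run the detector many times, each time occluding a different window of the image, and pool the detections that survive. Below is a minimal sketch of that loop, assuming a detector callable that maps an HxWxC array to a list of (box, score) pairs; the window size, stride, and pooling here are illustrative, not the paper's certified procedure.

    import numpy as np

    def masked_detections(image, detector, mask_size=64, stride=64):
        """Run the detector on many masked copies of the image and
        pool the resulting boxes. A patch can only suppress objects
        while it is visible, so occluding it lets them reappear."""
        h, w = image.shape[:2]
        all_boxes = []
        for top in range(0, h - mask_size + 1, stride):
            for left in range(0, w - mask_size + 1, stride):
                masked = image.copy()
                masked[top:top + mask_size, left:left + mask_size] = 0
                all_boxes.extend(detector(masked))
        return all_boxes

Detections recovered from the masked copies get merged with the detector's ordinary output, so an object the patch was hiding comes back as soon as some mask happens to cover the patch.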

Read more →


Faceoff: using stickers to fool Face ID

Posted by Dillon Niederhut on

What if breaking into an office was as easy as wearing a special pair of glasses, or putting a sticker on your forehead? It can be, if you make the right adversarial patch. Learn how to use adversarial machine learning to hide from face recognition systems, or convince them that you are someone else.
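
Both goals boil down to optimizing a loss on a face-embedding network: push your patched face away from your own enrolled embedding to hide, or pull it toward someone else's to impersonate them. A rough sketch under those assumptions (embed_model, patched_face, and the enrolled embeddings are placeholders, not the post's actual code):

    import torch
    import torch.nn.functional as F

    def patch_loss(embed_model, patched_face, own_embedding, target_embedding=None):
        """With no target, 'dodge': minimize similarity to your own
        enrolled embedding. With a target, 'impersonate': maximize
        similarity to the target's embedding."""
        emb = F.normalize(embed_model(patched_face), dim=-1)
        if target_embedding is None:
            return F.cosine_similarity(emb, own_embedding, dim=-1).mean()
        return -F.cosine_similarity(emb, target_embedding, dim=-1).mean()
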

Read more →


Is it illegal to hack a machine learning model?

Posted by Dillon Niederhut on

Maybe.

Read more →


Evading detection with a wearable adversarial t-shirt

Posted by Dillon Niederhut on

What if you could print an adversarial attack on the clothes you wear every day, one that evades detection by computer vision algorithms? This turns out to be a hard problem, because of the way fabric folds and shifts. Luckily, you can modify the attack's training procedure to incorporate that very behavior -- giving you your own adversarial t-shirt.
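
One way to picture that modification: on every training step, wrinkle the patch with a random non-rigid warp before rendering it onto the person, so the optimizer only keeps perturbations that survive cloth deformation. The sketch below is a crude stand-in for the deformation model the paper actually uses; random_cloth_warp and its parameters are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn.functional as F

    def random_cloth_warp(patch, max_shift=0.05):
        """Apply a smooth random warp to a batch of patches (n, c, h, w),
        crudely imitating how fabric folds and shifts."""
        n, c, h, w = patch.shape
        # Identity sampling grid in [-1, 1] coordinates.
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, h, device=patch.device),
            torch.linspace(-1, 1, w, device=patch.device),
            indexing="ij",
        )
        grid = torch.stack((xs, ys), dim=-1).unsqueeze(0).repeat(n, 1, 1, 1)
        # Jitter the grid with smooth random offsets to wrinkle the patch.
        offsets = max_shift * torch.randn(n, 2, 8, 8, device=patch.device)
        offsets = F.interpolate(offsets, size=(h, w), mode="bilinear", align_corners=True)
        grid = grid + offsets.permute(0, 2, 3, 1)
        return F.grid_sample(patch, grid, align_corners=True)

The warped patch then gets composited onto the person in each training image before the detector's loss is computed, so the gradient flows through the deformation.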

Read more →


Evading CCTV cameras with adversarial patches

Posted by Dillon Niederhut on

Adversarial patches showed a lot of promise in 2017 for confusing object detection algorithms -- by making bananas look like a toaster. But what if you want the bananas to disappear? This blog post summarizes a 2019 paper showing how an adversarial patch can mount an evasion attack, avoiding detection altogether.
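
The central trick is an objectness-suppression loss: render the patch onto each person in a batch of images, then push down the strongest detection the model still produces, and take gradient steps on the patch pixels. A minimal sketch, assuming a YOLO-style helper detector_objectness that returns per-cell objectness scores and a differentiable paste_patch rendering step (both placeholders, not the paper's code):

    import torch

    def evasion_step(patch, images, paste_patch, detector_objectness, lr=0.01):
        """One optimization step on the patch: minimize the highest
        objectness score the detector still assigns anywhere."""
        patch.requires_grad_(True)
        patched = paste_patch(images, patch)       # render the patch onto each person
        obj = detector_objectness(patched)         # assumed shape: (batch, num_cells)
        loss = obj.max(dim=1).values.mean()        # strongest remaining detection
        loss.backward()
        with torch.no_grad():
            patch -= lr * patch.grad               # gradient step on the patch pixels
            patch.clamp_(0, 1)                     # keep the patch a valid image
            patch.grad = None
        return loss.item()
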

Read more →