


Papers in adversarial machine learning

Adversarial patch attacks on self-driving cars

Posted by Dillon Niederhut on

Self-driving cars rely on vision for safety-critical information like traffic signs, which makes them susceptible to adversarial machine learning attacks. A few carefully placed stickers on a stop sign can make it invisible to autonomous vehicles, or an adversarial t-shirt can make a person look like a stop sign.

Read more →


Faceoff: using stickers to fool Face ID

Posted by Dillon Niederhut on

What if breaking into an office was as easy as wearing a special pair of glasses, or putting a sticker on your forehead? It can be, if you make the right adversarial patch. Learn how to use adversarial machine learning to hide from face recognition systems, or convince them that you are someone else.

Read more →


Spy GANs: using adversarial watermarks to send secret messages

Posted by Dillon Niederhut on

Sometimes you need to send encrypted information, but also keep the fact that you are sending it a secret. Hiding secrets inside ordinary-looking data like this is called steganography, and it's cooler than it sounds — unless you are super into stegosaurus, in which case it is exactly as cool as it sounds. With a few tweaks, you can use adversarial watermarking to hide information in normal-looking images and text. See how to do it here.

Read more →


When reality is your adversary: failure modes of image recognition

Posted by Dillon Niederhut on

Machine learning models can surpass human performance on some image recognition benchmarks, but they still fail in surprising ways. By cataloguing these "natural" adversarial examples, you can learn a lot about how computer vision models work. You also learn that if you paint enough things yellow, computers will think the world is bananas.

Read more →


Is it illegal to hack a machine learning model?

Posted by Dillon Niederhut on

Maybe.

Read more →