papers in adversarial machine learning
How Glaze and Nightshade try to protect artists
Posted by Dillon Niederhut on
Generative AI models have become increasingly effective at making usable art. Where does this leave artists? They can use tools like Glaze and Nightshade to discourage others from training models to reproduce their art, but this might not always work, and carries legal risks. Here's how they work.
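The rough idea behind these tools is to add a small perturbation that changes how a model's feature extractor "sees" an image without noticeably changing how a human sees it. Here is a minimal sketch of that kind of cloaking loop, assuming a hypothetical differentiable `feature_extractor` and a `target_features` vector to steer toward; the actual tools are much more sophisticated than this.

```python
import torch

def cloak(image, feature_extractor, target_features, steps=200, lr=0.01, budget=0.05):
    """Nudge pixels (within a small budget) so the extractor's features move
    toward a target, while the change stays hard for a human to notice.
    `feature_extractor` and `target_features` are illustrative assumptions,
    not Glaze's or Nightshade's actual internals."""
    # image is assumed to be a float tensor with values in [0, 1]
    delta = torch.zeros_like(image, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        perturbed = (image + delta).clamp(0.0, 1.0)
        loss = torch.nn.functional.mse_loss(feature_extractor(perturbed), target_features)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            delta.clamp_(-budget, budget)  # keep the cloak visually subtle
    return (image + delta).clamp(0.0, 1.0).detach()
```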
How to protect yourself from AI
Posted by Dillon Niederhut on
The potential for harm from AI grows with its increasing use in products and services. Even well-intentioned companies can't mitigate all the harm these systems might cause, but individuals can use technology of their own to shift these systems' behavior toward their preferences. Examples include adversarial patches to evade surveillance, and data poisoning to push loan-approval models toward fairer decisions.
You Only Look Eighty times: defending object detectors with repeated masking
Posted by Dillon Niederhut on
Adversarial patches pose a tricky problem in object detection, because any defense has to handle an unknown number of objects and an unknown number of patches. If you relax the problem to defending against evasion attacks only, you can reuse the masking approach from certified classification with some success.
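As a rough illustration of the masking-and-merging idea (not the actual certified defense), assume a `detector` callable that returns bounding boxes for a NumPy image. If a patch is hiding an object, the copy that masks out the patch should recover it, so we take the union of boxes across masked copies; the mask size, stride, and overlap threshold here are illustrative.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / (union + 1e-9)

def masked_copies(image, mask_size=64, stride=64):
    """Yield copies of the image with one square region blacked out,
    sliding the mask across a coarse grid."""
    h, w = image.shape[:2]
    for top in range(0, max(h - mask_size, 0) + 1, stride):
        for left in range(0, max(w - mask_size, 0) + 1, stride):
            masked = image.copy()
            masked[top:top + mask_size, left:left + mask_size] = 0
            yield masked

def defended_detect(image, detector, overlap=0.5):
    """Union the detector's boxes over all masked copies: if a patch was
    suppressing an object, the copy that masks the patch can restore it."""
    recovered = list(detector(image))
    for masked in masked_copies(image):
        for box in detector(masked):
            if not any(iou(box, kept) > overlap for kept in recovered):
                recovered.append(box)
    return recovered
```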
Minority reports (yes, like the movie) as a machine learning defense
Posted by Dillon Niederhut on
Adversarial patch attacks are hard to defend against because they are robust to denoising-based defenses. A more effective strategy involves generating several partially occluded versions of the input image, getting a set of predictions, and then taking the *least common* predicted label.
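Here's a minimal sketch of that voting scheme, assuming the input is a NumPy array and `classify` is a hypothetical callable returning a label; the window and stride values are illustrative, not the paper's.

```python
from collections import Counter

def minority_label(image, classify, window=64, stride=64):
    """Classify partially occluded copies of the image and return the least
    common label. If a patch is hijacking the prediction, only the copies
    that occlude the patch show the true class, so the true class turns up
    as the minority report."""
    h, w = image.shape[:2]
    votes = Counter()
    for top in range(0, h - window + 1, stride):
        for left in range(0, w - window + 1, stride):
            occluded = image.copy()
            occluded[top:top + window, left:left + window] = 0
            votes[classify(occluded)] += 1
    return min(votes, key=votes.get)  # the least common prediction
```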
Know thy enemy: classifying attackers with adversarial fingerprinting
Posted by Dillon Niederhut on
In threat intelligence, you want to know the characteristics of possible adversaries. In the world of machine learning, this could mean keeping a database of "fingerprints" of known attacks, and using these to inform real-time defense strategies if your inference system comes under attack. Would you like to know more?
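As a toy illustration of the database-and-match idea (the feature choices and numbers below are made up, not real attack statistics), a fingerprint could be a small vector summarizing how a suspicious query deviates from a clean reference, matched against known attacks by nearest neighbor.

```python
import numpy as np

# Hypothetical fingerprint database: each known attack is summarized by a few
# statistics of the perturbations it tends to produce (values are illustrative).
FINGERPRINTS = {
    "fgsm":  np.array([0.90, 0.10, 0.02]),
    "pgd":   np.array([0.60, 0.15, 0.05]),
    "patch": np.array([0.05, 0.95, 0.70]),
}

def fingerprint(query, reference):
    """Summarize how a suspicious query differs from a clean reference input."""
    delta = np.abs(query - reference)
    return np.array([
        delta.mean(),           # average perturbation size
        delta.max(),            # worst-case pixel change
        (delta > 0.5).mean(),   # fraction of heavily modified pixels
    ])

def identify_attack(query, reference):
    """Match the query's fingerprint to the nearest known attack, which can
    then inform which defense to switch on."""
    feats = fingerprint(query, reference)
    return min(FINGERPRINTS, key=lambda name: np.linalg.norm(FINGERPRINTS[name] - feats))
```

A real system would fingerprint sequences of queries, gradient estimates, or timing patterns rather than single inputs, but the lookup logic stays the same.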