Papers in adversarial machine learning — AI safety

How Glaze and Nightshade try to protect artists

Posted by Dillon Niederhut

Generative AI models have become increasingly effective at making usable art. Where does this leave artists? They can use tools like Glaze and Nightshade to discourage others from training models that reproduce their style, but these protections don't always hold up, and using them can carry legal risk. Here's how they work.
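As a rough illustration of the cloaking idea behind tools like Glaze: add a small perturbation to an image so that a model's feature extractor sees a different style, while keeping the change visually negligible. The sketch below is a minimal, hypothetical version in PyTorch, not Glaze's actual implementation; `feature_extractor` (a frozen vision encoder), `target_style` (the embedding of a decoy style), and the hyperparameters are all placeholders.

```python
import torch
import torch.nn.functional as F

# Minimal sketch of style cloaking -- NOT Glaze's actual algorithm.
# `feature_extractor` and `target_style` are hypothetical placeholders.

def cloak(image, feature_extractor, target_style,
          epsilon=0.05, steps=100, lr=0.01):
    """Perturb `image` so its style embedding drifts toward a decoy
    style, while the pixel-space change stays within +/- epsilon."""
    delta = torch.zeros_like(image, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        perturbed = (image + delta).clamp(0, 1)
        embedding = feature_extractor(perturbed)
        # Pull the embedding toward the decoy style.
        loss = F.mse_loss(embedding, target_style)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # Keep the perturbation too small to notice.
        with torch.no_grad():
            delta.clamp_(-epsilon, epsilon)
    return (image + delta).detach().clamp(0, 1)
```

The intuition: a mimic model trained on many cloaked images learns the decoy style rather than the artist's own.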



How to protect yourself from AI

Posted by Dillon Niederhut

The potential for harm from AI is growing with its increased use in products and services. Even well-intentioned companies cannot mitigate all the harm these systems might cause, but individuals can use adversarial techniques of their own to shift these systems' behavior toward their preferences. Examples include adversarial patches that defeat surveillance cameras, and data poisoning that nudges loan-approval models toward fairer decisions.
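To make the adversarial patch idea concrete, here is a minimal, hypothetical sketch in PyTorch: optimize a small image region so that pasting it onto photos lowers a detector's confidence. The `detector` function (assumed to return a per-image "person" confidence in [0, 1]) and `images` batch are placeholders, not any specific published attack.

```python
import torch

# Minimal sketch of an adversarial patch -- hypothetical throughout.
# Assumes `detector(images)` returns a per-image "person" confidence
# in [0, 1]; `images` is a batch of training photos.

def train_patch(images, detector, patch_size=64, steps=500, lr=0.05):
    patch = torch.rand(3, patch_size, patch_size, requires_grad=True)
    optimizer = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        patched = images.clone()
        # Paste the patch into a corner of every image.
        patched[:, :, :patch_size, :patch_size] = patch
        # Minimize the detector's confidence that a person is present.
        loss = detector(patched).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            patch.clamp_(0, 1)  # keep valid pixel values
    return patch.detach()
```

Real-world versions additionally randomize the patch's position, scale, and lighting during training so it survives printing and camera capture.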
