stuff that computers can't understand
everyone has a coffee mug from their office. but you can have a coffee mug that evades CCTV, fools computer vision, and makes machine learning models fail.
you are cutting edge. your stuff should be too.
how does it work?
we take the same cutting-edge machine learning models used in academic research and perturb their inputs bit by bit until we have a pattern that reliably breaks them. then we print the patterns on stuff.
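here's the flavor of it in code. this is a toy sketch, not our actual pipeline: the classifier, the patch size and placement, and the label index are all stand-ins we picked for illustration.

```python
import torch
import torchvision.models as models

# a pretrained classifier stands in for "the model" -- we only read its
# gradients, we never change its weights.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)

patch = torch.rand(1, 3, 64, 64, requires_grad=True)  # the thing we print
opt = torch.optim.Adam([patch], lr=0.01)

scene = torch.rand(1, 3, 224, 224)  # stand-in for a photo containing the patch
true_class = 761  # arbitrary example label we want the model to stop seeing

for step in range(200):
    pasted = scene.clone()
    pasted[:, :, 80:144, 80:144] = patch.clamp(0, 1)  # paste patch into scene
    loss = model(pasted)[0, true_class]  # model's confidence in the true label
    opt.zero_grad()
    loss.backward()  # how should each pixel change to lower that confidence?
    opt.step()       # perturb the patch a little, then repeat
```

run that loop long enough and the patch stops looking like noise and starts looking like a design the model really doesn't want to see.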
sometimes this makes an object look like something else. sometimes this makes an object look like nothing at all.
these are known as adversarial attacks. they invert the intent of the original technology, so models that were designed for surveillance can be used to create anti-surveillance tools.
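in loss-function terms, those two flavors look roughly like this. the function names are ours, and `logits` / `person_scores` are just whatever comes out of the model when it looks at the patched photo.

```python
import torch
import torch.nn.functional as F

def look_like_something_else(logits: torch.Tensor, decoy_class: int) -> torch.Tensor:
    # targeted attack: drive the model toward a decoy label until it wins
    return F.cross_entropy(logits, torch.tensor([decoy_class]))

def look_like_nothing(person_scores: torch.Tensor) -> torch.Tensor:
    # suppression attack: push the strongest "person" detection toward zero
    return person_scores.max()
```

minimize the first and the patch becomes a decoy. minimize the second and the detector just goes quiet.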
want your own adversarial gear?

who are we?
we're a couple of machine learning nerds who read a paper about adversarial patches back in 2017 and waited for someone to put them on stuff we could buy.
nobody did, so in 2020 we decided we would do it ourselves.
we're working on new things all the time, so stay tuned for more adversarial designs to add to your life.