Image-recognition AI often picks up on features in the noise of its training data, making it vulnerable to attacks – like a sticker that prevents a self-driving car from recognizing a stop sign. What if we tell the AI which features to look at?
Archiving my Twitter, Facebook and other social network activity