That turtle is a gun! MIT scientists highlight major flaw in image recognition
Why it matters to you
It may sound amusing, but this demonstration poses some serious security risks.
When is a rifle actually a 3D-printed turtle? When is an espresso actually a baseball? No, it’s not a case of predictive text gone massively wrong, but an alarming new piece of research from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), designed to show the limits — and potential dangers — of image recognition algorithms.
In a new paper, a team of MIT researchers was able to produce actual 3D-printed objects that could repeatedly and consistently trick neural networks designed for image classification. This was done by slightly changing the texture of an object, highlighting just how easily AI can be fooled in certain contexts. The work adds physical evidence to research on so-called “adversarial examples”: inputs altered with tiny, imperceptible perturbations that can utterly baffle image recognition systems, in this case no matter the angle from which the object is viewed.
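To make the idea concrete, here is a minimal sketch of a targeted adversarial perturbation in PyTorch. It is not the researchers’ code, and it assumes a recent torchvision install; the model choice, image file, and class index are placeholders. Starting from an ordinary photo, a tiny, bounded change to the pixels is optimized until a pretrained classifier reports a chosen target class.

```python
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# Pretrained ImageNet classifier (any torchvision model would do).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),          # pixel values scaled to [0, 1]
])
image = preprocess(Image.open("turtle.jpg")).unsqueeze(0)   # placeholder image
target = torch.tensor([413])        # placeholder ImageNet class index to aim for

delta = torch.zeros_like(image, requires_grad=True)   # the perturbation we optimize
epsilon = 8 / 255                   # keep each pixel change visually imperceptible
optimizer = torch.optim.SGD([delta], lr=1e-2)

for _ in range(200):
    optimizer.zero_grad()
    adv = torch.clamp(image + delta, 0, 1)              # stay a valid image
    loss = F.cross_entropy(model(normalize(adv)), target)
    loss.backward()
    optimizer.step()                                     # step toward the target class
    delta.data.clamp_(-epsilon, epsilon)                 # keep the change tiny

print(model(normalize(torch.clamp(image + delta, 0, 1))).argmax(dim=1))
```

If the attack succeeds, the final printout reports the chosen target index even though the altered photo looks essentially unchanged to a human.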
“It’s actually not just that they’re avoiding correct categorization — they’re classified as a chosen adversarial class, so we could have turned them into anything else if we had wanted to,” researcher Anish Athalye told Digital Trends. “The rifle and espresso classes were chosen uniformly at random. The adversarial examples were produced using an algorithm called Expectation Over Transformation (EOT), which is presented in our research paper. The algorithm takes in any textured 3D model, such as a turtle, and finds a way to subtly change the texture such that it confuses a given neural network into thinking the turtle is any chosen target class.”
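The key trick behind making the misclassification survive different viewpoints is to optimize the perturbation against an expectation over a whole distribution of transformations rather than a single fixed view. The paper’s 3D rendering pipeline is well beyond a snippet, but the hypothetical 2D sketch below conveys the idea: at each step the perturbed image is passed through several random rotations and scalings, and the average loss toward the target class is what gets minimized. The transformation ranges and hyperparameters are illustrative, not the paper’s.

```python
import random

import torch
import torch.nn.functional as F
from torchvision import models, transforms
from torchvision.transforms import functional as TF

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])

image = torch.rand(1, 3, 224, 224)   # placeholder for one view of the object
target = torch.tensor([413])         # placeholder index for the chosen target class

delta = torch.zeros_like(image, requires_grad=True)
epsilon = 8 / 255
optimizer = torch.optim.SGD([delta], lr=1e-2)

def random_view(x):
    """Sample one transformation from the distribution the attack must survive."""
    angle = random.uniform(-30.0, 30.0)   # random rotation
    scale = random.uniform(0.8, 1.2)      # random zoom
    return TF.affine(x, angle=angle, translate=[0, 0], scale=scale, shear=[0.0])

for _ in range(100):
    optimizer.zero_grad()
    adv = torch.clamp(image + delta, 0, 1)
    # Monte Carlo estimate of the expected loss over random transformations.
    loss = sum(F.cross_entropy(model(normalize(random_view(adv))), target)
               for _ in range(8)) / 8
    loss.backward()
    optimizer.step()
    delta.data.clamp_(-epsilon, epsilon)  # keep the perturbation imperceptible
```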
While it might be funny to have a 3D-printed turtle recognized as a rifle, the researchers point out that the implications are actually pretty darn terrifying. Imagine, for instance, a security system that uses AI to flag guns or bombs, but can be tricked into thinking that they are instead tomatoes, or cups of coffee, or even entirely invisible. It also underlines the frailty of the kind of image recognition systems that self-driving cars will rely on, at high speed, to make sense of the world around them.
“Our work demonstrates that adversarial examples are a bigger problem than many people previously thought, and it shows that adversarial examples for neural networks are a real concern in the physical world,” Athalye continued. “This problem is not just an intellectual curiosity: It is a problem that needs to be solved in order for practical systems that use deep learning to be safe from attack.”