Like an A.I. acid trip, this neural net rebuilds reality with flowers and fire
When computers get creative, the results are frequently fascinating, as a new project by artist and machine learning Ph.D. student Memo Akten admirably demonstrates. A bit like Google’s Deep Dream image generator, Akten’s work applies artificial neural networks to create some unusual visual effects. His “Learning To See” project uses image recognition neural nets to interpret what they see on a live video feed. The twist? He trained each of his neural networks on still images from a single category, water, sky, flowers, or fire, so that regardless of what image it actually sees, it interprets the input as waves crashing, fires roaring, or flowers growing.
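Akten’s actual system uses trained deep neural networks, but the core idea, that a system can only render new input in terms of the limited imagery it has already absorbed, can be sketched with a much simpler patch-matching analogy. In this hypothetical toy (not Akten’s method), the machine’s entire “experience” is a bank of small patches drawn from a single category; every patch of an incoming frame is replaced by the closest patch it has ever seen, and random arrays stand in for real images to keep the sketch self-contained:

```python
import numpy as np

rng = np.random.default_rng(0)

# The system's entire "experience": 500 stored 4x4 patches from one
# category (random arrays here stand in for, say, flower photos).
experience = rng.random((500, 4, 4, 3))

# An arbitrary incoming video frame the system has never seen.
frame = rng.random((64, 64, 3))

# Reinterpret the frame: each 4x4 block is replaced by the nearest
# patch from the single-category experience set.
out = np.empty_like(frame)
flat = experience.reshape(len(experience), -1)
for y in range(0, 64, 4):
    for x in range(0, 64, 4):
        patch = frame[y:y + 4, x:x + 4].ravel()
        idx = np.argmin(((flat - patch) ** 2).sum(axis=1))
        out[y:y + 4, x:x + 4] = experience[idx]

print(out.shape)  # (64, 64, 3)
```

However unfamiliar the frame, the output is assembled entirely from the one category the system was “trained” on, which is the effect Akten achieves, far more convincingly, with neural networks.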
“In some ways, this was a response to the binary polarization that we see politically in the U.K., in the United States, and in Turkey, which is where I’m from,” Akten told Digital Trends. “The idea is that all of us are only capable of seeing the world through the lens of what we’ve seen before. We’re incapable of seeing it through other people’s eyes because we’re so colored by what we know. In the case of this piece of work, the neural network has been trained only on certain images — such as waves or fire or flowers. As a result, everything it sees it can only make sense of based on its own experience.”
It’s an intriguing piece, both conceptually and technologically. Particularly impressive from a tech point of view is how fluid the movements look, given that Akten says the neural networks were trained exclusively on still images. From those stills alone, the A.I. has formed a fairly accurate idea of how fires burn or water moves.
“With any emerging technology, artists will always think about how they can apply it to their own domain, whether that’s painting, dance, performance, or whatever else,” Akten continued. “Right now these machine learning technologies are still a bit complex and inaccessible for a lot of people. But there’s a lot of work being done to make these tools into things which can be used by everyone.”