A mirror exposes AI’s inherent flaws in ‘Untrained Eyes’
In July 2015, Google’s public-relations machine was in full-on crisis mode. Earlier that year, the search giant had announced Photos, an AI-driven app that used machine learning to automatically tag and organize your pictures based on the people, places and things depicted in them. It was an exciting step forward, but Photos wasn’t perfect. While the app was capable of recognizing some faces, it misidentified others. It would have been easy to pass this off as a routine software bug if it weren’t for the nature of the failure.
Untrained Eyes was made possible through funding from the Engadget Alternate Realities grant program, established in May 2017. It will debut, along with four other prize-winning immersive-media projects, at the Engadget Experience on November 14th, 2017. For more information about the Engadget Experience, the grant program and the grantees, visit our events page, and buy your ticket to the event before tickets run out.
In at least one case, Photos automatically tagged a black couple as gorillas. When the news went global, there was one question on everyone’s minds: How could this happen?
Google attributed the issue to the failure of its image-recognition software to adjust for obscured faces and bad lighting. Others blamed it on algorithmic bias, the tendency for developers to let their prejudices and often limited life experiences come through in their code.
@jackyalcine skin tones and lighting, etc. We used to have a problem with people (of all races) being tagged as dogs, for similar reasons.
— (((Yonatan Zunger))) (@yonatanzunger) June 29, 2015
The concept of algorithmic bias is the focal point of Untrained Eyes, a new interactive sculpture from conceptual artist Glenn Kaino and his frequent collaborator, actor Jesse Williams (Grey’s Anatomy). When I visited Kaino’s East Side studio — one of two in Los Angeles — the project was a functioning, albeit unrefined, prototype. At the time, it was a mess of laptops, wires and household electronics, but when it’s complete it will be a monolithic sculpture of reflective glass.
From a distance, the piece looks like a long decorative mirror, but when a viewer approaches and waves at their reflection, a Kinect sensor behind the glass triggers a small home-security camera to take a picture. The picture is uploaded to a server, where facial-recognition software “matches” the viewer’s face to images in a curated database. The resulting “reflection” is basically an approximation of how the computer sees you. It’s a simple interaction that takes seconds, but the underlying message is multilayered.
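The article doesn’t detail the software behind the mirror, but the lookup it describes, reducing the viewer’s photo to facial features and returning the closest image in a curated database, can be sketched as a simple nearest-neighbor search. Everything here is hypothetical: the filenames, the four-number feature vectors and the function name are invented for illustration, not taken from Kaino’s code.

```python
import math

# Invented stand-in for the curated database: each image is represented by a
# small feature vector of the kind a face-recognition model might produce.
DATABASE = {
    "man_with_long_hair.jpg": (0.9, 0.2, 0.7, 0.1),
    "woman_in_suit.jpg": (0.1, 0.8, 0.3, 0.6),
    "smiling_child.jpg": (0.5, 0.5, 0.9, 0.9),
}

def match_reflection(viewer_features):
    """Return the database image whose features are closest to the viewer's."""
    return min(DATABASE, key=lambda name: math.dist(DATABASE[name], viewer_features))

# A viewer whose features happen to sit nearest the first entry:
print(match_reflection((0.8, 0.3, 0.6, 0.2)))  # man_with_long_hair.jpg
```

Whatever images the database happens to contain sets a hard ceiling on who the mirror can “see,” which is precisely the bias the sculpture dramatizes.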
Kaino and Williams wanted to reveal how something as seemingly innocuous as a Google search can expose algorithmic bias. Kaino points out that searching for “man” on Google Images surfaces page after page of white men in business suits, looking confidently into the camera, while a search for “woman” brings up a grid of white women in various stages of undress. Untrained Eyes sheds light on issues of representation, forcing the viewer to confront how a computer, and by extension an unknown programmer, sees them.
“One’s subjective lived experience naturally biases their work,” Kaino said. “I think it is possible, however, to create a consciousness about our general instinct to be biased and to actually create systems and put systems in place to counteract what might be negative signifiers that are created because of bias. So humans will always be biased, but if we can create a level of consciousness that bias exists, then at least we’ll have an opportunity to create systems that are just and understand the notion of what equality means.”
In a room parallel to Kaino’s workshop, a large mirrored panel sits atop a pair of sawhorses. Kinect sensors and off-the-shelf security cameras litter the space where his tech team — made up of colleagues from his time as senior VP at the Oprah Winfrey Network — refines the image-recognition software and motion sensors that drive Untrained Eyes. Just one month out from its debut, it is still an assemblage of wood, wires and mirrors. It’s hard to envision what it will ultimately look like, but Kaino is unfazed. The art, to him, is in the data.
“I think, now, people make the assumption that complicated technologies operate as simple as calculators, and they don’t realize the algorithmic apparatus that it takes to actually create technologies to do things like put things up on social-media feed or calculate insurance rates and risk,” he said. “What they don’t necessarily understand is that, within that, the nuance of the engineer coder person, who’s actually ascribed their lived experience into the code, has created and embedded these assumptions deep into the fabric of the technology.”
It was important to Kaino that his team have control over the image-gathering process instead of tapping into an existing repository like Getty or Google Images, so that they could experience first-hand the creation of algorithmic bias. The team started with an image of Kaino in order to test the facial-recognition software. With that in working order, they trawled the internet for images that fit descriptions of his team members for further internal testing.
When it was clear the machine was operating the way they’d hoped, they compiled a list of attributes like hair color and length, approximate age, eye color, skin color, ethnicity and so on. In order to expand the dataset and expose the biases inherent in data collection, they then outsourced image-gathering to anonymous proxies. There were no limitations on where or how the images were procured.
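As a rough illustration of the attribute list above, one could imagine each gathered image carrying a record like the following. The field names mirror the article’s list; the sample values, URLs and the two gatherers’ divergent vocabularies are invented to show how unconstrained collection lets inconsistencies, and with them biases, creep into the data.

```python
from dataclasses import dataclass

@dataclass
class ImageRecord:
    # Fields mirror the attributes the team listed; all values below are invented.
    source_url: str
    hair_color: str
    hair_length: str
    approx_age: int
    eye_color: str
    skin_color: str
    ethnicity: str

# Two anonymous gatherers describing broadly similar subjects with no shared
# vocabulary; the mismatched labels are where invisible biases accumulate.
record_a = ImageRecord("https://example.com/a.jpg", "dark brown", "long",
                       30, "brown", "tan", "Asian")
record_b = ImageRecord("https://example.com/b.jpg", "black", "shoulder-length",
                       28, "dark", "medium", "Japanese American")

print(record_a.hair_color == record_b.hair_color)  # False
```

Nothing in such a schema forces two gatherers to describe the same kind of face the same way, so the “imperfections in the system” Kaino describes are baked in from the first record.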
That anonymity and the differences in process from one image gatherer to the next allowed for imperfections in the system and an accumulation of invisible biases. Kaino likens it to an open-source library where a well-intentioned developer could unwittingly duplicate biased code.
“The notion that we’ve been through this process and these challenges and created this imperfect dataset, from our sense, it’s great,” Kaino said. “In that case, I’d say this project is intended on being a failure of representation.”
Failures of representation feature prominently throughout Kaino and Williams’ collaborations. Kaino is a world-renowned conceptual artist whose works exploring race, power, justice, violence and technology have appeared at the Whitney Biennial, the Andy Warhol Museum and the Museum of Contemporary Art in Los Angeles, among others. Meanwhile, Williams, best known for his role as Dr. Jackson Avery on Grey’s Anatomy, has used his celebrity to draw attention to racial injustice through projects like the documentary Stay Woke: The Black Lives Matter Movement.
From their GIF keyboard with a social conscience, Ebroji, to their upcoming documentary about gold-medalist Tommie Smith’s raised fist of defiance at the 1968 Olympics, the two draw on their experiences as minorities while attempting to create positive representations for their communities. According to Williams, the act of collecting the dataset for Untrained Eyes allowed them to wrest control of representation from the usual suspects.
“We’ve relied on and pulled from everybody else’s or the most popular dataset for our whole lives,” Williams said. “That’s what got us into this mess, frankly, is letting everybody else kind of compile the default, the go-to, the blank canvas — whiteness as default, whiteness as neutral setting to build from, as the start mark. And that’s just not our reality.”
Untrained Eyes may not be a wall of whiteness like the Google Images results Kaino references, but it’s confrontational nonetheless. There’s something playful about approaching a mirror, waving at yourself and having a stock photograph reflected back at you in front of an audience. Time after time during my two-day visit, I watched as members of Kaino’s team and our own crew approached the mirror.
Amidst the laughter and exposed insecurities that lingered in the air, I found myself staring at what I imagined was a Creative Commons photo titled “man with long hair.” His nose was a little smaller, his face was a bit thinner and his mane just a touch more lustrous than mine, but we could be described the same way. Still, I couldn’t accept that he looked like me.
I kept coming back to a question Kaino presented on our first day at the studio: How did it come up with that? But I know the answer. It’s simple. Someone taught it.
“I think of those machines as children in some way,” Williams said. “We’re establishing some kind of root foundation. If a 5-year-old kid says something racist or sexist or homophobic to her classmate, she got that from somewhere. She wasn’t born with that. She got that from somewhere.”