‘Hearables’ could diagnose disease, if we let them
Poppy Crum is the Chief Scientist at Dolby Labs, and no stranger to the conference circuit. Her talk at this year’s SXSW — “A Hearable Future: Sound & Sensory Interface” — promised to dive into the hidden possibilities that sound and the human ear have to offer technology. Unfortunately, and perhaps ironically, Crum’s talk was plagued by audio problems throughout (through no fault of her own).
“The ear is this incredible hub of insight between our internal state and the external world,” Crum told the audience, before having to ask the technician to reduce the reverb on her microphone. Only moments earlier, her laptop (and therefore her presentation) had died thanks to a technician plugging it into the wrong outlet.
Crum handled the inconvenience deftly, taking a poignant question from an audience member who asked if technology could offer her hope — she expects to be fully deaf within 10 years. Crum said it could: “We want to de-stigmatize wearing hearables, we want that,” before explaining her goal of helping to democratize the hearing technology space, where six companies currently own 98 percent of the market. “That’s not okay.” The technical glitches, and Crum’s elegant handling of them, had the audience cheering in support.
“The ear directs the eye,” Crum added, explaining how situational awareness is often led by our hearing, not our sight: footsteps coming up behind us, or our ability to place a sound in 3D space long before we see what’s causing it.
Current audio wearables — or, if we must, “hearables” — are starting to do more than simply deliver enhanced sound. Companies like Here, Nura Sound and Bragi (among others) have introduced sound augmentation with varied success. But most are still teetering on the edge of audio assistance — reducing background noise, or adding to the sound we already hear. Crum thinks we can do much more, and with technology that already exists.
But that advantage comes at a price. “The power of the hearable is only realized if we let the device have access and process our personal data,” Crum told Engadget after her talk. How tech firms have handled, or protected, our personal data hasn’t exactly been a success so far, so we’re forced to make the eternal choice between convenience and progress on one hand and privacy on the other. “I think we know what to do to protect that data,” Crum added, “but it’s what we do with the understanding of that data [that’s important],” hinting that the payoff could be worth it, but that it’s a long road ahead.
Let’s be clear: we’re not just talking about better voice recognition, or knowing when to lower the music in our cars to calm a stressful drive. Using just our voices, scientists can predict the onset of multiple sclerosis and diabetes (through physiological changes that affect the vocal tract), and even psychosis (through vocal patterns). But do you want Amazon, Google or Apple to be the one to diagnose you? And to hold that data in their coffers? My guess is no.
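To make the idea of “vocal biomarkers” concrete, here is a toy sketch — emphatically not a clinical method, and not anything from Crum’s talk — of the kind of low-level acoustic features (fundamental frequency and cycle-to-cycle “jitter”) that speech-health researchers commonly extract. The synthetic signal, sample rate and thresholds are all illustrative assumptions.

```python
import math

SAMPLE_RATE = 16000  # Hz; an assumed, typical speech sample rate

def synth_voice(freq_hz, seconds=0.5):
    """Generate a stand-in 'voice': a pure tone at freq_hz."""
    n = int(SAMPLE_RATE * seconds)
    return [math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)
            for i in range(n)]

def cycle_lengths(samples):
    """Distances (in samples) between successive upward zero crossings."""
    crossings = [i for i in range(1, len(samples))
                 if samples[i - 1] < 0 <= samples[i]]
    return [b - a for a, b in zip(crossings, crossings[1:])]

def pitch_and_jitter(samples):
    """Estimate fundamental frequency (Hz) and relative period variability."""
    periods = cycle_lengths(samples)
    mean_p = sum(periods) / len(periods)
    f0 = SAMPLE_RATE / mean_p
    # Jitter: mean absolute deviation of period length, relative to the mean.
    jitter = sum(abs(p - mean_p) for p in periods) / (len(periods) * mean_p)
    return f0, jitter

f0, jitter = pitch_and_jitter(synth_voice(200.0))
print(round(f0), round(jitter, 3))  # a steady tone: ~200 Hz, near-zero jitter
```

A real pipeline would work on recorded speech and feed dozens of such features into a trained model; the point here is only that the raw inputs are ordinary, measurable properties of the waveform.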
“The ear is a very special place where we can gain some of the richest insight into our bodies and the external world.”
Crum’s definition of a “hearable,” and the insights such devices can offer, goes beyond wireless earbuds, though. “Hearables are devices that listen; they don’t even need a transducer, they could just listen to your body.” This includes one key area of technology slowly but surely invading our living space: virtual assistants.
Right here, we have a technology that, if we let it, could listen to our daily lives and offer up all sorts of insight: health issues, lifestyle assistance, entertainment recommendations (and enhancement) and more. But letting someone — or something — listen in on our daily lives is an adjustment, and the technology will need time to earn our trust. “They might just know more about us than we know,” Crum reminds us. And that’s both horrifying and exciting at the same time.
Catch up on the latest news from SXSW 2018 right here.