Carnegie Mellon taught a computer to read human body language in real time
Why it matters to you
The technology could unlock new ways to interact with machines, play games or musical instruments, or produce content.
Technology like the Microsoft Kinect camera can already carry out simple motion sensing and use it as a way of interfacing with software. However, researchers from Carnegie Mellon University have gone much further with a new computer system capable of recognizing the body poses and movements of multiple people in real time — right down to a person’s facial expression or the pose of their individual fingers.
“The technology has the potential to unlock new ways for us to interact with machines, to play games, play musical instruments, or produce content,” Yaser Sheikh, an associate professor of robotics at Carnegie Mellon, told Digital Trends. “It will help us diagnose and treat behavioral conditions such as autism, depression, and dyslexia. It has the ability to create new monitoring systems for physical therapy and rehabilitation. It will allow us to build safer systems, such as self-driving cars and home robotics. Perhaps its most exciting potential — and the one that motivates me — is that machines would be able to enter our social spaces and become collaborative partners in our daily lives, instead of passive tools.”
While the system was developed with the aid of the Panoptic Studio, a two-story dome embedded with 500 video cameras, it can be used by anyone with a single camera and a laptop computer. With this setup, users can potentially carry out some extraordinary recognition tasks — such as monitoring every member of a sports team during a game, or building self-driving car technology that might offer early warnings about pedestrians based on their body language.
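To make the single-camera use case concrete, pose-estimation systems of this kind typically report, for each detected person, a list of body keypoints as (x, y, confidence) triples. The sketch below shows one simple way downstream code might consume such output — filtering out low-confidence detections and computing a per-person bounding box for tracking. The keypoint names, the confidence cutoff, and the triple format are illustrative assumptions, not the CMU system's actual output schema.

```python
# Hypothetical sketch of consuming pose-estimation output.
# Assumed format: one person = a list of (x, y, confidence) triples,
# loosely following the common COCO-style keypoint layout. The exact
# format of the CMU/OpenPose code may differ.

CONF_THRESHOLD = 0.3  # assumed cutoff; discard uncertain detections


def bounding_box(keypoints, threshold=CONF_THRESHOLD):
    """Return the (x_min, y_min, x_max, y_max) box around the
    confidently detected keypoints of one person, or None if no
    keypoint clears the confidence threshold."""
    pts = [(x, y) for x, y, c in keypoints if c >= threshold]
    if not pts:
        return None
    xs, ys = zip(*pts)
    return (min(xs), min(ys), max(xs), max(ys))


# One detected person: three confident keypoints plus one noisy one.
person = [
    (120.0, 80.0, 0.9),   # nose
    (100.0, 140.0, 0.8),  # right shoulder
    (140.0, 141.0, 0.7),  # left shoulder
    (300.0, 400.0, 0.1),  # spurious low-confidence point, filtered out
]

print(bounding_box(person))  # (100.0, 80.0, 140.0, 141.0)
```

Running such a box computation per person, per frame, is the basic building block for tracking multiple players across a game or flagging a pedestrian whose pose changes abruptly.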
The work is being presented at CVPR 2017, the Computer Vision and Pattern Recognition Conference, scheduled to take place later in July in Honolulu, Hawaii. However, the Carnegie Mellon team is not keeping it locked up as simply a research project. The team has open-sourced the code on GitHub, giving anyone who wants to take advantage of the technology access to it. It is already being used by various research groups, and interest has been expressed by major commercial research and development labs — including in the automotive industry.
Sheikh does warn against misuse, though. “[As exciting as this kind of technology is, it also has] tremendous negative potential in enabling broad-based surveillance and monitoring for specific behaviors,” he said. “There are still ways to go before that happens, of course, but our human community needs to start thinking about its implications.”