Stanford’s laser tech could help self-driving cars see around blind corners
By many accounts, you may be safer in a car driven by artificial intelligence than in one driven by a human. But driverless cars aren’t flawless, and they can’t make good decisions about things they can’t see. Imagine how much better they might be if they could see around corners, spotting, for example, a child running into the street just beyond a sharp bend in the road.
Fortunately, you may not have to imagine for too much longer. That is because researchers from Stanford University have created new imaging technology for identifying objects that are out of view.
“We [have] developed an imaging technique to see objects hidden from view by treating walls as diffuse mirrors,” Matthew O’Toole, a postdoctoral fellow in computational imaging at Stanford University, told Digital Trends. “Our system shares many similarities to Lidar, a technology used by autonomous cars to detect the 3D shape of pedestrians and cars on the road. Like Lidar, we estimate shape by sending pulses of light into an environment and measuring the time required for the light to return to a sensor. Unlike Lidar, we also capture the light that scatters off a visible wall and interacts with objects hidden from view. Our algorithm uses this information to infer the 3D shape of the hidden objects.”
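The round-trip timing idea O’Toole describes can be sketched in a few lines. The snippet below is a simplified illustration, not the Stanford system: it assumes an idealized confocal setup (laser and sensor aimed at the same wall point) and a single bounce off the hidden object, and the function name and parameters are hypothetical.

```python
# Speed of light in a vacuum, meters per second.
C = 299_792_458.0

def hidden_object_range(photon_time_s: float, device_to_wall_m: float) -> float:
    """Estimate the wall-to-hidden-object distance from photon travel time.

    Simplified single-bounce model: the photon travels
    device -> wall -> hidden object -> wall -> device, so the total
    path length is 2 * (device_to_wall + wall_to_object).
    """
    total_path_m = C * photon_time_s
    return total_path_m / 2.0 - device_to_wall_m

# Example: device 1.0 m from the wall, hidden object 0.5 m behind
# the wall point, so the round trip covers 3.0 m of path.
t = 2.0 * (1.0 + 0.5) / C
print(hidden_object_range(t, 1.0))  # ~0.5 m
```

The real reconstruction is far harder: each wall point scatters light diffusely in all directions, so the algorithm must untangle millions of overlapping photon paths to recover a 3D shape, which is why acquisition currently takes minutes.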
In addition to driverless cars, David Lindell, another researcher on the project, told us that a camera able to see around corners could be useful in search and rescue, medical imaging, and surveillance.
“To make ‘imaging around corners’ viable for real-world scenarios, we still need to shorten our procedure’s acquisition time,” O’Toole continued. “Our current prototype takes several minutes to collect enough photons to reconstruct images of objects hidden from sight. With better hardware such as a brighter laser, we believe this can be done within fractions of a second.”
Stanford isn’t the only top-flight institution working on this problem. Researchers from the Massachusetts Institute of Technology’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have also dedicated considerable time and effort to the same challenge. Their approach, however, involves analyzing how light reflects off the ground to predict what lurks around the corner. While neither solution yet exists in real-world autonomous vehicles, it may only be a matter of time before that changes.
A paper describing the Stanford research was recently published in the journal Nature.