Some of you know that I spent a small slice of my career leading a team working on autonomous collision avoidance for drones (work that translates directly to cars, trucks, and tractors). One of the biggest problems with visual sensing was fog, dust, smoke, and other airborne particles: they scatter light, distorting both shapes and colors in the image.
Stanford researchers may have a solution. From the summary:
Like a comic book come to life, researchers at Stanford University have developed a kind of X-ray vision — only without the X-rays. Working with hardware similar to what enables autonomous cars to “see” the world around them, the researchers enhanced their system with a highly efficient algorithm that can reconstruct three-dimensional hidden scenes based on the movement of individual particles of light, or photons. In tests, detailed in a paper published Sept. 9 in Nature Communications, their system successfully reconstructed shapes obscured by 1-inch-thick foam. To the human eye, it’s like seeing through walls.
And more: “A lot of imaging techniques make images look a little bit better, a little bit less noisy, but this is really something where we make the invisible visible,” said Gordon Wetzstein, assistant professor of electrical engineering at Stanford and senior author of the paper. “This is really pushing the frontier of what may be possible with any kind of sensing system. It’s like superhuman vision.”
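To make the quoted idea concrete: these systems fire laser pulses and time how long individual photons take to bounce back, and range falls out of the round-trip time. This is a minimal sketch of that time-of-flight principle only, not the Stanford reconstruction algorithm; the function name and the example timing are hypothetical, for illustration.

```python
# Illustrative sketch of photon time-of-flight ranging (the principle behind
# LiDAR-style sensing, NOT the Stanford reconstruction algorithm).
# Names and numbers below are hypothetical, chosen for illustration.

C = 299_792_458.0  # speed of light, m/s

def range_from_round_trip(t_seconds: float) -> float:
    """Distance to a reflector, given the photon's round-trip travel time.

    The photon travels out and back, so the one-way distance is half
    the total path length: d = c * t / 2.
    """
    return C * t_seconds / 2.0

# A photon returning after roughly 66.7 nanoseconds implies a target
# about 10 meters away:
t = 66.7e-9
print(round(range_from_round_trip(t), 2))  # ≈ 10.0 m
```

The hard part the Stanford work addresses is that fog, dust, or foam scatters photons along many indirect paths, smearing those arrival times; their algorithm's contribution is untangling that scattered signal to recover the hidden scene.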
How long before this is commercially implementable? Your guess is as good as mine. Without this type of technology, however, expensive alternative sensors and redundant systems will be required to navigate safely through dirty or clouded air. After this week on the West Coast, who would buy an autonomous vehicle that can't see through dust, smoke, or water?