Advancing sensor tech for foggy situations

Devices that rely on sensors to accurately navigate and perceive the world around them are increasingly commonplace, from drones to autonomous vehicles to ground robots on rescue missions. Parth Pathak, an associate professor in the Department of Computer Science at George Mason University, is working to ensure those sensors have 20/20 vision.

From left, Rezoan Ahmed Nazib, Parth Pathak, and Ahmad Kamari with a rescue robot that can "see" through smoke and fog. Photo provided

Pathak received $660K in funding from the Army Research Office (ARO) for this work, part of which is conducted in collaboration with colleagues at the University of California, Davis, where he completed his postdoc.

“Conventional sensors rely on cameras or LiDAR (light detection and ranging) to perceive objects around them, but they don't work very well when there's smoke, fog, or a generally visually degraded environment,” said Pathak. “But the mmWave (millimeter-wave) wireless radar sensors that we are working on don't get affected by that. If there is dirt on the sensor, well, that's okay. They can see through things and see around things.”

Imagine a rescue robot going into a building filled with smoke, trying to navigate with little to no visibility, Pathak said. "These wireless sensors can enable them to perceive the environment and even self-localize without cameras, LiDARs, or other positioning systems.” 

Another advantage of the devices is that while they can sense their surroundings, they don't sense too much, which matters for privacy. The disadvantage, of course, is that when a sensor depicts an object such as a car, the resolution is not particularly good and the images are “noisy.” Pathak is not only improving navigation and perception but also exploring how multiple robots can work cooperatively. In a rescue mission, a swarm of robots can share their data, allowing them to collectively “see” a clearer picture.
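As a rough illustration of that cooperative idea, and not a description of the team's actual pipeline, the Python sketch below fuses noisy point detections from two simulated robots into a shared occupancy grid. The grid resolution, noise level, robot poses, and the simple "keep cells supported by several detections" fusion rule are all assumptions chosen for demonstration.

# Illustrative sketch only: fusing noisy 2D detections from two robots into a
# shared occupancy grid. Each robot reports points in its own frame plus its
# pose; aligning and accumulating them gives the swarm a cleaner shared map
# than any single noisy sensor produces on its own.
import numpy as np

GRID_SIZE = 100   # 100 x 100 cells
CELL_M = 0.1      # each cell covers 0.1 m x 0.1 m

def to_world(points_local, pose):
    """Rotate and translate a robot's local (x, y) detections into the world frame."""
    x, y, theta = pose
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return points_local @ R.T + np.array([x, y])

def fuse(detections):
    """Accumulate hit counts from all robots, then keep well-supported cells."""
    grid = np.zeros((GRID_SIZE, GRID_SIZE))
    for points_local, pose in detections:
        world = to_world(points_local, pose)
        cells = np.floor(world / CELL_M).astype(int)
        valid = (cells >= 0).all(axis=1) & (cells < GRID_SIZE).all(axis=1)
        for cx, cy in cells[valid]:
            grid[cy, cx] += 1
    # Cells supported by several detections are kept; isolated hits are
    # treated as noise.
    return grid >= 3

# Simulate a 2 m wall observed by two robots from different poses.
rng = np.random.default_rng(0)
wall = np.column_stack([np.linspace(2.0, 4.0, 50), np.full(50, 3.0)])
noisy = lambda: wall + rng.normal(scale=0.05, size=wall.shape)

pose_b = (1.0, -0.5, np.pi / 6)                   # robot B's pose in the world
x, y, th = pose_b
Rb = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
wall_in_b = (noisy() - np.array([x, y])) @ Rb     # what robot B measures locally

occupancy = fuse([(noisy(), (0.0, 0.0, 0.0)), (wall_in_b, pose_b)])
print(int(occupancy.sum()), "cells marked occupied")

In this toy version, detections that agree across robots reinforce each other while stray noise is discarded, which is the intuition behind a swarm "seeing" a better picture together.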


“They can self-localize based on what they see, like how our brains work. But the robots only have wireless sensors to rely on, so part of the work is developing very good signatures of what they see from these very low-resolution and noisy images,” said Pathak. “We can build 3D models of a room by scanning it with the wireless sensors and using machine learning to capture and recreate every minute detail. This is something these sensors were never designed for. We are developing custom-tailored deep learning models for wireless sensing, essentially pushing the limits of what they can perceive using wireless signals.”
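To make that concrete, here is a toy sketch of the general idea of learning to reconstruct finer detail from coarse, noisy radar returns. It is not the team's model: the network architecture, input and output sizes, and the loss function are all assumptions picked purely for illustration.

# Illustrative sketch only (not the team's actual model): a small encoder-decoder
# that learns to map a coarse, noisy radar heatmap to a finer occupancy estimate.
import torch
import torch.nn as nn

class RadarUpsampler(nn.Module):
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Upsample the 32x32 radar grid to a 64x64 occupancy estimate.
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=3, padding=1), nn.Sigmoid(),
        )

    def forward(self, heatmap):
        return self.decode(self.encode(heatmap))

model = RadarUpsampler()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

# Fake training pair: noisy low-resolution heatmaps and the occupancy they came from.
coarse = torch.rand(8, 1, 32, 32)                       # batch of noisy radar heatmaps
target = torch.randint(0, 2, (8, 1, 64, 64)).float()    # ground-truth occupancy maps

optimizer.zero_grad()
pred = model(coarse)
loss = loss_fn(pred, target)
loss.backward()
optimizer.step()
print("predicted occupancy shape:", tuple(pred.shape))  # (8, 1, 64, 64)

In practice, a pipeline like this would be trained on pairs of radar scans and ground-truth maps of real rooms, but the specifics of the team's training data and models are beyond what the article describes.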

Beyond the research itself, ARO’s funding supports testbed-to-prototype development and solution evaluation.

Pathak and colleagues published this research at the Association for Computing Machinery’s MobiCom conference (ACM MobiCom) and have submitted it to other conferences for potential publication. Two PhD students from his team, Ahmad Kamari and Rezoan Ahmed Nazib, are actively working on the project, along with three high school students who participated in prototyping over the summer as part of George Mason's Aspiring Scientists Summer Internship Program.