Engineers Explore Algorithms, Materials to Improve AI Vision
Mizzou Engineering researchers are investigating how to engineer materials to improve a machine’s ability to see and understand physical objects.
The problem is two-sided. On one hand, the team needs to figure out why artificial intelligence (AI) algorithms sometimes misidentify objects in images or fail to recognize them at all. Ultimately, those algorithms control the behavior of drones and self-driving cars. On the other hand, they need to determine what properties a material would need to interact ideally with those algorithms.
“We want AI to see better,” Associate Professor Derek Anderson said. “It’s like if I walked around without my glasses, everything looks fuzzy. We want to design glasses so the AI algorithm performs as well as possible. But how do you design those glasses for an AI algorithm? How do you pick the right material?”
Electrical Engineering and Computer Science researchers, along with the Army Engineer Research and Development Center, are tackling the problem from both ends. Anderson and PhD student Charlie Veal are trying to determine when and why algorithms break down. Professor Scott Kovaleski and PhD student Marshall Lindsay are studying the physical side of the equation to see whether it’s possible to create a material that helps AI’s vision or helps us understand what hinders that vision.
“If we understand what flaw or difficulty the algorithm has, we can improve it or, on the other side, try to keep the algorithm confused, depending on the objective,” Kovaleski said. “We are looking for ways to combine or shape existing materials to produce a desired effect, by incorporating the mechanisms of interaction of materials with electromagnetic waves into our algorithms.”
Developing such material could have numerous applications.
For autonomous vehicles, it could amplify some objects while essentially filtering out objects you may not want a self-driving car to respond to. For instance, the vehicle needs to see a pedestrian in front of it, but it should not react to a person pictured on a roadside billboard.
In another example, using the new material in clothing could make it easier for autonomous cars to detect cyclists or pedestrians in poor lighting conditions.
In military applications, it could help save lives by improving vision-guided drones or ground vehicles in support of troop transport, navigation and exploration.
A ‘Marriage’ of AI and Materials
The team recently presented their findings on AI-enhanced coded apertures and holography at the 2020 and 2021 conferences of SPIE, the Society of Photo-Optical Instrumentation Engineers. Just as AI can be used to design better materials, understanding the physical context (e.g., electromagnetic constraints) of objects and environments can improve the effectiveness of AI algorithms.
In digital spaces, AI is good at identifying objects in images. Classification networks such as GoogLeNet, for instance, can recognize a ball or an apple in a photo. But if you alter the image slightly with a carefully crafted filter, the AI has a much harder time correctly identifying the object, even though the change is undetectable to the human eye.
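The fragility described above can be sketched with a toy example. The snippet below is not from the researchers' work: it uses a tiny linear classifier as a stand-in for a large network like GoogLeNet, and a fast-gradient-sign-style perturbation (a standard adversarial-attack technique) to show how a change too small to notice can flip a prediction. All weights and inputs are illustrative assumptions.

```python
import numpy as np

# A tiny linear classifier standing in for a large network.
# Weights and bias are illustrative, not from the article.
w = np.array([1.0, -1.0])
b = 0.0

def predict(x):
    """Return class 1 if the decision score is positive, else class 0."""
    return int(x @ w + b > 0)

# A clean input the classifier labels as class 1 (score = 0.1).
x = np.array([0.6, 0.5])

# Fast-gradient-sign-style perturbation: nudge each input component
# a tiny amount (epsilon) in the direction that lowers the score.
# For a linear model, the gradient of the score w.r.t. x is just w.
epsilon = 0.06
x_adv = x - epsilon * np.sign(w)

print(predict(x))      # 1  (clean input)
print(predict(x_adv))  # 0  (flipped by an imperceptibly small change)
```

The perturbation moves each component by only 0.06, yet the decision score crosses zero and the label flips, which is the same failure mode that carefully crafted image filters exploit in deep networks.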
That’s because AI is trained to look for specific things, Lindsay said. For instance, you can train an AI to draw a dog by showing it images of dogs, but that doesn’t mean the computer understands what a dog is.
“It’s often the case that the AI learns what makes a ‘good’ dog such as legs, a head and ears,” Lindsay said. “However, if we don’t explicitly tell it, the AI may miss out on key concepts such as a dog only having four legs. In such cases, we see AI imagine ‘dogs’ with an incorrect number of legs. Providing this physical context to an AI is essential if we want to leverage its ability to solve our problems while ensuring its decisions don’t violate any physical context.”
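Lindsay's point about enforcing physical context can be illustrated with a minimal sketch. The check below is a hypothetical example, not the team's method: it simply rejects generated samples whose attributes violate a known physical constraint (here, a dog having more than four legs), the kind of rule an AI would not learn unless told explicitly.

```python
# Hypothetical post-hoc physical-context filter. The attribute names
# and the constraint are illustrative assumptions, not from the article.
def violates_physical_context(attrs, max_legs=4):
    """Return True if a generated sample breaks the leg-count constraint."""
    return attrs.get("legs", 0) > max_legs

# Two generated "dogs": one physically plausible, one not.
samples = [
    {"label": "dog", "legs": 4},
    {"label": "dog", "legs": 6},
]
valid = [s for s in samples if not violates_physical_context(s)]

print(len(valid))          # 1
print(valid[0]["legs"])    # 4
```

A rule-based filter like this is the simplest way to impose physical context; the research described here explores the deeper question of building such constraints into the algorithms and materials themselves.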
Foundation Based on Explainable AI
The team also presented their findings at the 2020 Institute of Electrical and Electronics Engineers (IEEE) Symposium Series on Computational Intelligence. The goal was to create a machine that can illustrate the behaviors of an AI algorithm in a way that is both controlled and understandable to humans, Veal said. His part of the research focused on the construction of the machine and evaluating AI algorithms under different conditions.
“The research project was interesting because it not only allowed us to create a machine that can identify AI behaviors (both the good and the bad), but it also established an experimental foundation that helps us understand how these different algorithms should be used—and most importantly how they should not be used,” he said. “Overall, I hope this research inspires someone else to examine the behaviors of the different AI algorithms.”
For now, though, the group is working to understand the concepts behind how physics and AI interact.
“On one hand you’ve got AI and materials that are linked with one another,” Anderson said. “If you can improve the material, you can make things easier to find. Conversely, you could make things harder to see. There’s this marriage between AI and materials that has not been appreciated enough to date. We’re exploring some basic questions about how these should be mathematically and experimentally linked.”
Want to study AI and computer vision? Explore the EECS Department at Mizzou Engineering!