December 23, 2020
Artificial intelligence is smart, but it’s also too trusting. Currently, AI blindly trusts sensor data and decisions derived from that data.
Now, Derek Anderson, an associate professor of electrical engineering and computer science, is trying to figure out how to build AI that can react more intelligently to dynamic, unknown environments.
With a four-year grant from the U.S. Army Engineer Research and Development Center (ERDC), he and EECS Associate Professor Gui DeSouza are analyzing sensor data and AI internal decision making to assess trust.
“Machines don’t currently do much introspection,” Anderson said. “As we adopt more autonomous technologies, we have to get smarter about AI knowing when somebody is trying to fool it.”
The work's broader implications could help improve emerging autonomous technologies like self-driving cars and drones.
Right now, for instance, those technologies can be fooled if the environment they were designed to operate in suddenly changes.
“We need AI to know when something is not right,” he said. “Current state-of-the-art technology focuses on task-specific AI. This trend needs to change if we want to achieve general AI.”