A new AI training program helps robots own their ignorance | Science News



News in Brief


More self-aware machines could avoid making dangerous mistakes

12:04pm, January 30, 2019
WATCH AND LEARN  Shadowing humans on the job could help such autonomous robots as delivery bots (one shown) and self-driving cars recognize shortcomings in their own training.


HONOLULU — A new training scheme could remind artificial intelligence programs that they aren’t know-it-alls.

AI programs that run robots, self-driving cars and other autonomous machines often train in simulated environments before making real-world debuts (SN: 12/8/18, p. 14). But situations that an AI doesn’t encounter in virtual reality can become blind spots in its real-life decision making. For instance, a delivery bot trained in a virtual cityscape with no emergency vehicles may not know that it should pause before entering a crosswalk if it hears sirens.

To create machines that err on the side of caution, computer scientist Ramya Ramakrishnan of MIT and colleagues developed a post-simulation training program in which a human demonstrator helps the AI identify gaps in its education. “This allows the [AI] to safely act in the real world,” says Ramakrishnan, whose work is being presented January 31 at the AAAI Conference on Artificial Intelligence. Engineers could also use information on AI blind spots to design better simulations in the future.

During its probationary period, the AI takes note of environmental factors influencing the human’s actions that it does not recognize from its simulation. When the human does something the AI doesn’t expect — like hesitating to enter a crosswalk despite having the right-of-way — the AI scans its surroundings for previously unknown elements, such as sirens. If the AI detects any of these features, it assumes the human is following some safety protocol it didn’t learn in the virtual world and that it should defer to the human’s judgment in these types of situations.
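The detection logic described above can be sketched in code. This is a minimal Python illustration of the general idea, not the researchers' actual system; every name here (the feature sets, `predict_action`, `defer_to_human`) is invented for the example, and real implementations would work over learned state representations rather than labeled feature sets.

```python
# Hypothetical sketch: an AI trained in simulation flags "blind spots" by
# watching where a human demonstrator's action diverges from its own policy.
# All identifiers are illustrative, not taken from the paper.

KNOWN_FEATURES = {"crosswalk", "pedestrian", "traffic_light"}  # seen in simulation


def predict_action(observation):
    # Simulation-trained policy: enter the crosswalk unless a pedestrian is present.
    return "wait" if "pedestrian" in observation else "enter"


def update_blind_spots(observation, human_action, blind_spots):
    """When the human does something the policy did not expect, flag any
    unfamiliar features in the scene as a likely blind spot."""
    if human_action != predict_action(observation):
        unknown = observation - KNOWN_FEATURES
        if unknown:
            blind_spots.add(frozenset(unknown))
    return blind_spots


def act(observation, blind_spots):
    # Defer to human judgment whenever the scene matches a known blind spot.
    if any(spot <= observation for spot in blind_spots):
        return "defer_to_human"
    return predict_action(observation)


# Probationary period: the human hesitates at a clear crosswalk because a
# siren is audible, so "siren" is recorded as a blind-spot feature.
blind_spots = update_blind_spots({"crosswalk", "siren"}, "wait", set())
```

After this single demonstration, `act({"crosswalk", "siren"}, blind_spots)` would defer to the human, while a siren-free crosswalk would still be handled by the simulation-trained policy.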

Ramakrishnan and colleagues have tested this setup by first training AI programs in simplistic simulations and then letting them learn their blind spots from human characters in more realistic, but still virtual, worlds. The researchers now need to test the system in the real world.   

Citations

R. Ramakrishnan et al. Overcoming blind spots in the real world: Leveraging complementary abilities for joint execution. Thirty-Third AAAI Conference on Artificial Intelligence, Honolulu, January 31, 2019.

Further Reading

M. Temming. Virtual avatars learned cartwheels and other stunts from videos of people. Science News. Vol. 194, December 8, 2018, p. 14.

M. Temming. When it comes to self-driving cars, what’s safe enough? Science News Online, November 21, 2017.
