Artificial intelligence needs smart senses to be useful

True intelligence, Meghan Rosen notes in this issue’s cover story “Robot awakening” (SN: 11/12/16, p. 18), lies in the body as well as the brain. And building machines with the physical intelligence that even the clumsiest human takes for granted — the ability to sense, respond to and move through the world — has long been a stumbling block for artificial intelligence research. While more sophisticated software and ultrafast computers have led to machine “brains” that can beat a person at chess or Go, building a robot that can move the pieces, fetch an iced tea or notice if the chessboard has turned into Candy Land has been difficult.

Rosen explores several examples of how roboticists are embodying smarts in their creations, crucial steps in creating the autonomous machines most of us imagine when we hear “robot.” Of course, we are already, if unwittingly, living in a world of robots. As AI researcher Richard Vaughan of Simon Fraser University in Burnaby, Canada, pointed out to me recently, once a machine becomes part of everyday life, most people stop thinking of it as a robot. “Driverless cars are robots. Your dishwasher is a robot. Drones are extremely cheap flying robots.”

In fact, Vaughan says, in the last few decades, robots’ intelligence and skills have grown dramatically. Those advances were made possible by major developments in probabilistic state estimation — which allows robots to figure out where they are and what’s going on around them — and machine learning software.

Probabilistic state estimation has enabled better integration of information from a robot’s sensors. Using the math of Bayesian reasoning, robots can compare sensor data against a model of the world and estimate how likely each interpretation is to be right. For example, a robot in a building can use its laser sensors to assess the space around it, compare that reading with its inner map of the building and determine that it’s not in Hall A but has equal chances of being in Hall B or C.
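
To make that update concrete, here is a minimal sketch in Python of the hall example above. The halls, the sensor likelihoods and all the numbers are invented for illustration, not taken from any real robot:

```python
# A minimal sketch of the Bayesian update described above, with made-up
# numbers. The halls, sensor model and probabilities are all hypothetical.

# Prior belief: the robot starts equally unsure about all three halls.
prior = {"Hall A": 1 / 3, "Hall B": 1 / 3, "Hall C": 1 / 3}

# Likelihood: how probable the laser reading is if the robot were in each
# hall, judged against its inner map. Here the reading is a poor match for
# Hall A but fits Halls B and C equally well.
likelihood = {"Hall A": 0.02, "Hall B": 0.49, "Hall C": 0.49}

# Bayes' rule: posterior is proportional to likelihood times prior,
# then normalized so the probabilities sum to 1.
unnormalized = {hall: likelihood[hall] * prior[hall] for hall in prior}
total = sum(unnormalized.values())
posterior = {hall: p / total for hall, p in unnormalized.items()}

print(posterior)
# {'Hall A': 0.02, 'Hall B': 0.49, 'Hall C': 0.49}
```

A real robot runs this kind of update continuously, folding in each new sensor reading and each motion it makes.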

Robots could do that in the 1990s. Scientists then asked a tougher question: How do you know where you are if you have no map? In two dimensions, researchers solved that problem — known as simultaneous localization and mapping, or SLAM — by integrating sensory information with a set of all possible maps. But only recently was the problem solved in three dimensions, and challenges remain for robots in less-structured or harsh environments.
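
One way to picture reasoning over “a set of all possible maps” is a toy Bayes filter over map-and-position pairs. This sketch is purely illustrative: the two candidate corridor maps, the door sensor and its 90 percent accuracy are assumptions, and a real SLAM system also folds in the robot’s motion, which is omitted here for brevity.

```python
# A toy version of localizing against "a set of all possible maps," in one
# dimension. The corridor layouts and sensor model are invented for
# illustration; real SLAM systems reason over vastly larger spaces and
# also update the belief whenever the robot moves.

# Two hypothetical corridor maps: True marks a cell with a doorway.
maps = {
    "map1": [True, False, False, True, False],
    "map2": [False, True, False, False, True],
}

# Start with a uniform prior over every (map, position) pair.
states = [(m, x) for m in maps for x in range(5)]
belief = {s: 1 / len(states) for s in states}

def sense(belief, saw_door, p_correct=0.9):
    """Bayes update: reweight each (map, position) hypothesis by how well
    the observation matches what that map predicts at that position."""
    new = {}
    for (m, x), p in belief.items():
        match = maps[m][x] == saw_door
        new[(m, x)] = p * (p_correct if match else 1 - p_correct)
    total = sum(new.values())
    return {s: p / total for s, p in new.items()}

# After the robot sees a door, hypotheses that predict a door at its
# position gain weight: the robot simultaneously narrows down where it
# is and which map describes the real world.
belief = sense(belief, saw_door=True)
for s, p in sorted(belief.items(), key=lambda kv: -kv[1]):
    print(s, round(p, 3))
```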

Machine learning advances have also aided aspects of AI such as computer vision, which improved greatly thanks to work on boosting search engines’ ability to identify images (so a search for “birthday party” turns up photos of candled cakes, for example). That research has helped make robot senses smarter.

Progress is swift, as Rosen makes clear in her story, but many challenges remain. Roboticists still struggle with hardware, especially for humanoid robots, which remain rather clunky. Walking, climbing stairs, picking things up and getting back up after a fall are still hard. Providing independent power sources is also a big deal — batteries aren’t yet good enough. But building robots that can do all that people want them to do, whether that’s driving us to work, helping the elderly up from a chair or collaborating safely with human workers in factories and warehouses, will take even better senses. Intelligence is not simply processing information or even learning new information. It’s also about noticing what’s going on around you and knowing how best to respond.
