For robots, artificial intelligence gets physical

To work with humans, machines need to sense the world around them


ROBOT AWAKENING  By giving robots physical intelligence, researchers hope to build machines that can work alongside humans.

Human: cherezoff/shutterstock; Robot: Willyam Bradberry/shutterstock


In a high-ceilinged laboratory at Children’s National Health System in Washington, D.C., a gleaming white robot stitches up pig intestines.

The thin pink tissue dangles like a deflated balloon from a sturdy plastic loop. Two bulky cameras watch from above as the bot weaves green thread in and out, slowly sewing together two sections. Like an experienced human surgeon, the robot places each suture deftly, precisely — and with intelligence.

Or something close to it.

For robots, artificial intelligence means more than just “brains.” Sure, computers can learn how to recognize faces or beat humans in strategy games. But the body matters too. In humans, eyes and ears and skin pick up cues from the environment, like the glow of a campfire or the patter of falling raindrops. People use these cues to take action: to dodge a wayward spark or huddle close under an umbrella.

Part of intelligence is “walking around and picking things up and opening doors and stuff,” says Cornell computer scientist Bart Selman. It “has to do with our perception and our physical being.” For machines to function fully on their own, without humans calling the shots, getting physical is essential. Today’s robots aren’t there yet — not even close — but amping up the senses could change that.

“If we’re going to have robots in the world, in our home, interacting with us and exploring the environment, they absolutely have to have sensing,” says Stanford roboticist Mark Cutkosky. He and a group of like-minded scientists are making sensors for robotic feet and fingers and skin — and are even helping robots learn how to use their bodies, like babies first grasping how to squeeze a parent’s finger.

The goal is to build robots that can make decisions based on what they’re sensing around them — robots that can gauge the force needed to push open a door or figure out how to step carefully on a slick sidewalk. Eventually, such robots could work like humans, perhaps even caring for the elderly.


FEELIN’ IT From touch to sight, robots are getting a sensory upgrade.

Video & Images: DeepMind, DARPA, V. Santos & R. Hellman/UCLA, A. Wu, Google, Boston Dynamics, Carla Schaffer/AAAS, Sheikh Zayed Institute for Pediatric Surgical Innovation, D. Hughes and N. Correll/Bioinsp. & Biomimetics 2015, M. Cutkosky/Stanford University, D. Christensen; Music: Podington Bear (CC BY-NC 3.0). H. Thompson

Such machines of the future are a far cry from that shiny white surgery robot in the D.C. lab, essentially an arm atop a cart. But today’s fledgling sensing robots mark the slow awakening of machines to the world around them, and themselves.

“By adding just a little bit of awareness to the machine,” says pediatric surgeon Peter Kim of the children’s hospital, “there’s a huge amount of benefit to gain.”

Born to run

A small running robot named SAIL-R (top) has tactile sensors attached to six C-shaped legs (bottom). When the legs whirl around and smack the ground, the sensors detect the impact forces, which could help the machine choose an appropriate walking gait.

A. Wu/Stanford Univ.

The pint-size machine running around Stanford’s campus doesn’t look especially self-aware.

It’s a rugged sort of robot, with stacked circuit boards and bundles of colorful wires loaded on its back. It scampers over grass, gravel, asphalt — any surface roboticist Alice Wu can find.

For weeks this summer, Wu took the traveling bot outside, placed it on the ground, and then, “I let her run,” she says. The bot isn’t that fast (its top speed is about half a meter per second), and it doesn’t go far, but Wu is trying to give it something special: a sense of touch. Wu calls the bot SAIL-R, for Sensorized Adaptive Intelligence Legged Robot.

Fixed to each of its six C-shaped legs are tactile sensors that can tell how hard the robot hits the ground. Most robots don’t have tactile sensing on their feet, Wu says. “When I first got into this, I thought that was crazy. So much effort is focused on hands and arms.” But feet make contact with the world too.

Feeling the ground, in fact, is crucial for walking. Most people tailor their gait to different surfaces without even thinking, feet pounding the ground on a run over grass, or slowing down on a street glazed with ice. Wu wants to make robots that, like humans, sense the surface they’re on and adjust their walk accordingly.

Walking robots have already ventured out into the world: Last year, a competition sponsored by DARPA, the Department of Defense agency that funds advanced research, showcased a lineup of semiautonomous robots that walked over rubble and even climbed stairs (SN: 12/13/14, p. 16). But they didn’t do it on their own; hidden away in control rooms, human operators pulled the strings.

One day, Wu says, machines could feel the ground and learn for themselves the most efficient way to walk. But that’s a tall order. For one, researchers can’t simply glue the delicate sensors designed for a robot’s hands onto its feet. “The feet are literally whacking the sensor against the ground very, very hard,” Wu says. “It’s unforgiving contact.”

That’s the challenge with tactile sensing in general, says Cutkosky, Wu’s adviser at Stanford. Scientists have to build sensors that are tough, that can survive impact and abrasion and bending and water. It’s one reason physical intelligence has advanced so slowly, he says.

“You can’t just feed a supercomputer thousands of training examples,” Cutkosky says, the way AlphaGo learned how to play Go (SN Online: 3/15/16). “You actually have to build things that interact with the world.”

Cutkosky would know. His lab is famous for building such machines: tiny “microTugs” that can team up, antlike, to pull a car, and a gecko-inspired “Stickybot” that climbs walls. Tactile sensing could make these and other robots smarter.

Wu and colleagues presented a new sensor at IROS 2015, a meeting on intelligent robots and systems in Hamburg, Germany. The sensor, a sandwich of rubber and circuit boards, can measure adhesion forces — what a climbing robot uses to stick to walls. Theoretically, such a device could tell a bot if its feet were slipping so it could adjust its grip to hang on. And because the postage stamp–sized sensor is tough, it might actually survive life on little robot feet.

Wu has used a similar sort of sensor on an indoor, two-legged bot, the predecessor to the six-legged SAIL-R. The indoor bot can successfully distinguish between hard, slippery, grassy and sandy surfaces more than 90 percent of the time, Wu reported in IEEE Robotics and Automation Letters in July.

That could be enough to keep a bot from falling. On a patch of ice, for example, “it would say, ‘Uh-oh, this feels kind of slippery. I need to slow down to a walk,’ ” Wu says.
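In outline, that kind of surface-aware gait control could look something like the sketch below: classify the terrain from a few features of each foot strike, then pick a gait to match. The features, training data and classifier here are hypothetical stand-ins, not the actual SAIL-R code.

```python
# Hypothetical sketch: classify terrain from foot-impact features, then pick a gait.
# The features, fake training data and classifier are illustrative stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

TERRAINS = ["hard", "slippery", "grassy", "sandy"]

def impact_features(force_trace):
    """Summarize one foot strike: peak force, impulse, and when the peak occurs."""
    force_trace = np.asarray(force_trace, dtype=float)
    peak = force_trace.max()
    impulse = force_trace.sum()                        # area under the force curve
    peak_time = np.argmax(force_trace) / len(force_trace)  # fraction of the strike
    return [peak, impulse, peak_time]

# Train on labeled foot strikes collected while running over known surfaces.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3)) + np.repeat(np.arange(4), 100)[:, None]  # fake data
y = np.repeat(TERRAINS, 100)
clf = RandomForestClassifier(n_estimators=50).fit(X, y)

def choose_gait(force_trace):
    """Pick a gait to match the classified terrain."""
    terrain = clf.predict([impact_features(force_trace)])[0]
    if terrain == "slippery":
        return "walk"       # 'Uh-oh, this feels kind of slippery. I need to slow down.'
    if terrain == "sandy":
        return "high-step"
    return "trot"
```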

Ideally, Cutkosky says, robots should be covered with tactile sensors — just like human skin. But scientists are still figuring out how a machine would deal with the resulting deluge of information.

Smart skin

Even someone sitting (nearly) motionless at a desk in a quiet, temperature-controlled office is bombarded with information from the senses.

Fluorescent lights flutter, air conditioning units hum and the tactile signals are too numerous to count. Fingertips touch computer keys, feet press the floor, forearms rest on the desk. If people couldn’t tune out some of the “noise” picked up by their skin, it would be total sensory overload.

“You have millions of tactile sensors, but you don’t sit there and say, ‘OK, what’s going on with my millions of tactile sensors,’ ” says Nikolaus Correll, a roboticist at the University of Colorado Boulder. Rather, the brain gets a filtered message, more of a big-picture view.

That simplified strategy may be a winner for robotic skin, too. Instead of sending every last bit of sensing data to a centralized robotic brain, the skin should do some of the computing itself, says Correll, who made the case for such “smart” materials in Science in 2015.

A rubbery, artificial skin covers the back of a robot named Baxter. The skin contains 10 sensor nodes — vibration sensors coupled with tiny computers (inset) that let the skin detect different textures.

D. Hughes and N. Correll/Bioinsp. & Biomimetics 2015

“When something interesting happens, [the skin] could report to the brain,” Correll says. Like human skin, artificial skin could take all the vibration info received from a nudge, or a tap to the shoulder, and translate it into a simpler message for the brain: “The skin could say, ‘I was tapped or rubbed or patted at this position,’ ” he says. That way, the robot’s brain doesn’t have to constantly process a flood of vibration data from the skin’s sensors.

It’s called distributed information processing. Correll and Colorado colleague Dana Hughes tested the idea with a stretchy square of rubbery skin mounted on the back of an industrial robot named Baxter. Throughout the skin, they placed 10 vibration sensors paired with 10 tiny computers. Then the team trained the computers to recognize different textures by rubbing patches of cotton, cardboard, sandpaper and other materials on the skin.

Their sensor/computer duo was able to distinguish between 15 textures about 70 percent of the time, Hughes and Correll reported in Bioinspiration & Biomimetics in 2015. And that’s with no centralized “brain” at all. That kind of touch discrimination brings the robotic skin a step closer to human skin. Making robotic parts with such sensing abilities “will make it much easier to build a dexterous, capable robot,” Correll says.
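A rough sketch of that division of labor might look like the code below: each skin node summarizes its own vibration stream and passes along only a short event message when something crosses a threshold. The node layout, features and thresholds here are illustrative assumptions, not the Colorado group's published design.

```python
# Illustrative sketch of distributed skin processing: each node summarizes its own
# vibration data and reports only compact "events" to the robot's central brain.
# Node layout, features and thresholds are assumptions, not the published design.
import numpy as np

class SkinNode:
    def __init__(self, node_id, position, event_threshold=0.2):
        self.node_id = node_id
        self.position = position            # (x, y) location on the skin patch
        self.event_threshold = event_threshold

    def process(self, vibration_window):
        """Reduce a window of raw vibration samples to one small message, or nothing."""
        v = np.asarray(vibration_window, dtype=float)
        energy = float(np.sqrt(np.mean(v ** 2)))                    # RMS amplitude
        dominant = int(np.argmax(np.abs(np.fft.rfft(v)[1:])) + 1)   # rough frequency bin
        if energy < self.event_threshold:
            return None                     # nothing interesting; don't bother the brain
        kind = "tap" if dominant < 5 else "rub"                     # crude heuristic
        return {"node": self.node_id, "where": self.position, "what": kind}

# The central controller only sees the handful of events, not every raw sample.
nodes = [SkinNode(i, (i % 5, i // 5)) for i in range(10)]
window = np.sin(np.linspace(0, 40 * np.pi, 256)) * 0.5              # fake vibration burst
events = [e for e in (n.process(window) for n in nodes) if e is not None]
```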

And with smart skin, robots could invest more brainpower in the big stuff, what humans begin learning at birth — how to use their own bodies.

Zip it

In UCLA’s Biomechatronics Lab, a green-fingered robot just figured out how to use its body for one seemingly simple task: closing a plastic bag.

Two deformable finger pads pinch the blue seal with steady pressure (the enclosed Cheerios barely tremble) as the robot slides its hand slowly along the plastic zipper. After about two minutes, the fingers reach the end, closing the bag. It’s deceptively difficult. The bag’s shape changes as it’s manipulated — tough for robotic fingers to grasp. It’s also transparent — not easily detectable by computer vision.

You can’t just tell the robot to move its fingertips horizontally along the zipper, says Veronica Santos, a roboticist at UCLA. She and colleague Randall Hellman, a mechanical engineer, tried that. It’s too hard to predict how the bag will bend and flex. “It’s a constant moving target,” Santos says.

So the researchers let the robot learn how to close the bag itself.

First they had the bot randomly move its fingers along the zipper, while collecting data from sensors in the fingertips — how the skin deforms, what vibrations it picks up, how fluid pressure in the fingertips changes. Santos and Hellman also taught the robot where the zipper was in relation to the finger pads. The sweet spot is smack dab in the middle, Santos says.

Then the team used a type of algorithm called reinforcement learning to teach the robot how to close the bag. “This is the exciting part,” Santos says. The program gives the robot “points” for keeping the zipper in the fingers’ sweet spot while moving along the bag.

“If good stuff happens, it gets rewarded,” Santos says. When the bot holds the zipper near the center of the finger pads, she explains, “it says, ‘Hey, I get points for that, so those are good things to do.’ ”
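In schematic form, the learning loop boils down to scoring each fingertip adjustment by how well it keeps the zipper centered on the pad. The states, actions, reward and toy dynamics below are simplified assumptions for illustration, not the UCLA group's actual formulation.

```python
# Schematic tabular Q-learning for the bag-closing task: the state is how far the
# zipper sits from the center of the finger pad, actions nudge the fingers up or
# down, and the reward gives "points" for keeping the zipper in the sweet spot.
# States, actions, reward and dynamics are simplified assumptions for illustration.
import numpy as np

OFFSETS = np.linspace(-1.0, 1.0, 11)       # discretized zipper offset from pad center
ACTIONS = [-1, 0, +1]                      # nudge fingers down, hold, nudge up
Q = np.zeros((len(OFFSETS), len(ACTIONS)))
alpha, gamma, epsilon = 0.1, 0.9, 0.2
rng = np.random.default_rng(1)

def reward(state_idx):
    """More points the closer the zipper is to the center of the finger pad."""
    return 1.0 - abs(OFFSETS[state_idx])

def step(state_idx, action):
    """Toy dynamics: the action shifts the offset, plus a little random bag wobble."""
    drift = rng.integers(-1, 2)            # the bag deforms unpredictably
    nxt = int(np.clip(state_idx + ACTIONS[action] + drift, 0, len(OFFSETS) - 1))
    return nxt, reward(nxt)

state = 0                                  # start with the zipper near the pad's edge
for _ in range(5000):
    if rng.random() < epsilon:             # explore occasionally
        action = int(rng.integers(len(ACTIONS)))
    else:                                  # otherwise do what has paid off so far
        action = int(np.argmax(Q[state]))
    nxt, r = step(state, action)
    Q[state, action] += alpha * (r + gamma * Q[nxt].max() - Q[state, action])
    state = nxt
# After training, the greedy policy steers the zipper back toward the pad's center.
```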

She and Hellman reported successful bag closing in April at the IEEE Haptics Symposium in Philadelphia. “The robot actually learned!” Santos says. And in a way that would have been hard to program.

It’s like teaching someone how to swing a tennis racket, she says. “I can tell you what you’re supposed to do, and I can tell you what it might feel like.” But to smash a ball across a net, “you’re going to have to do it and feel it yourself.”

Learning by doing may be the way to get robots to tackle all sorts of complicated tasks, or simple tasks in complicated situations. The crux is embodiment, Santos says, or the robot’s awareness that each of its actions brings an ever-shifting kaleidoscope of sensations.


Deformable, sensing finger pads (green) help a robot figure out how to seal a plastic bag. Researchers at UCLA designed a learning algorithm that gives the robot points for keeping the seal in the center of the finger pad (green square, right).

Both: V. Santos/UCLA

Smooth operator

Awareness of the sights of surgery, and what to make of them, is instrumental for a human or machine trying to stitch up soft tissue.

Skin, muscle and organs are difficult to work with, says Kim, the surgeon at Children’s National Health System. “You’re trying to operate on shiny, glistening, blood-covered tissues,” he says. “They’re different shades of pink and they’re moving around all the time.”

Surgeons adjust their actions in response to what they see: a twisting bit of tissue, for example, or a spurt of fluid. Machines typically can’t gauge their location amid slippery organs or act fast when soft tissues tear. Robots needed an easier place to start. So, in 1992, surgery bots began working on bones: rigid material that tends to stay in one place.

In 2000, the U.S. Food and Drug Administration approved the first surgery robot for soft tissue: the da Vinci Surgical System, which looks like a prehistoric version of Kim’s surgery machine. Da Vinci is about as wide as a king-sized mattress and reaches 6 feet tall in places, with three mechanical arms tipped with disposable tools. Nearby, a bulky gray cart holds two silver hand controls for human surgeons.

In the cart’s backless seat, a surgeon would lean forward into a partially enclosed pod, hands gripping controls, feet working pipe organ–like pedals. To move da Vinci’s surgical tools, the surgeon would manipulate the controls, like those claw cranes kids use to pick up stuffed animals at arcades. “It’s what we call master/slave,” Kim says. “Essentially, the robot does exactly what the surgeon does.”

Da Vinci can manipulate tiny tools and keep incisions small, but it’s basically a power tool. “It has no awareness,” Kim says, “no intelligence.” The visual inputs of surgery are processed by human brains, not a computer.

Kim’s robot is a more enlightened beast. Named STAR, for Smart Tissue Autonomous Robot, the bot has preprogrammed surgical knowledge and hefty cameras that let it see and react to the environment. Recently, STAR stitched up soft tissue in a living animal — a first for a machine. The bot even outperformed human surgeons on some measures, Kim and colleagues reported in May in Science Translational Medicine.

Severed pig intestines sewn up in the lab by STAR tended to leak less than did intestines fixed by humans using da Vinci, laparoscopic tools or hand sewing. When researchers held the intestines under water and inflated them with air, it took nearly double the pressure for the STAR-repaired tissue to spring a leak compared with intestines patched up by humans.

Kim credits STAR’s even stitches for the win. “It’s more consistent,” he says. “That’s the secret sauce.”

To keep track of its position on tissue, STAR uses near-infrared fluorescent imaging (like night vision goggles) to follow glowing dots marked by a person. To orient itself in space, STAR uses a 3-D camera with multiple lenses.
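A bare-bones version of that marker tracking would simply hunt for the brightest blobs in each near-infrared frame and report their centers, so the planner knows where the tissue has moved. The threshold and synthetic image below are assumptions, not STAR's actual pipeline.

```python
# Illustrative sketch of tracking fluorescent markers in a near-infrared image:
# threshold the bright spots, group neighboring pixels into blobs, and return each
# blob's center so a stitch planner can follow the tissue as it moves.
# The threshold and synthetic frame are assumptions, not STAR's actual pipeline.
import numpy as np
from scipy import ndimage

def find_markers(nir_frame, threshold=0.8):
    """Return (row, col) centroids of bright fluorescent dots in an NIR frame."""
    bright = nir_frame > threshold * nir_frame.max()   # keep only the glowing pixels
    labels, n_blobs = ndimage.label(bright)            # group connected bright pixels
    return ndimage.center_of_mass(bright, labels, list(range(1, n_blobs + 1)))

# Fake 100x100 frame with two glowing markers on a dim background.
frame = np.random.rand(100, 100) * 0.1
frame[20:23, 30:33] = 1.0
frame[70:73, 60:63] = 1.0
print(find_markers(frame))   # roughly [(21.0, 31.0), (71.0, 61.0)]
```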

Then the robot taps into its surgical knowledge to figure out where to place a stitch. In the experiment reported in May, humans were still in the loop: STAR would await an OK before firing a stitch in a tricky spot, and an assistant helped keep the thread from tangling (a task commonly required in human-led surgeries too). Soon, STAR may be more self-sufficient. In late November, Kim plans to test a version of his machine with two robotic arms to replace the human assistant. He would also like to give STAR a few more superhuman senses, such as gauging blood flow and detecting subsurface structures, the way a submarine pings an underwater shipwreck.

One day, Kim says, such technology could essentially put a world-class surgeon in every hospital, “available anyplace, anytime.”

Santos sees a future, 10 to 20 years from now perhaps, where humans and robots collaborate seamlessly — more like coworkers than master and slave. Robots will need all of their senses to take part, she says. They might not be the artificially intelligent androids of the movies, like Ex Machina’s cunning humanoid Ava. But like humans, intelligent, autonomous machines will have to learn the limits and capabilities of their bodies. They’ll have to learn how to move through the world on their own.


This article appears in the November 12 issue of Science News with the headline, “Robot awakening: Physical intelligence makes machines aware of the world around them.”

Meghan Rosen is a staff writer who reports on the life sciences for Science News. She earned a Ph.D. in biochemistry and molecular biology with an emphasis in biotechnology from the University of California, Davis, and later graduated from the science communication program at UC Santa Cruz.