This robot can tell when you’re about to smile — and smile back

Emo can predict a human smile 839 milliseconds before it happens

With its hairless silicone skin and blue complexion, Emo the robot looks more like a mechanical re-creation of the Blue Man Group than a regular human. Until it smiles.

In a study published March 27 in Science Robotics, researchers detail how they trained Emo to smile in sync with humans: the robot anticipates a person’s smile 839 milliseconds before it appears and grins back at the same moment.

Right now, most humanoid robots smile back at a person only after a noticeable delay, often because they are merely imitating a person’s face, reacting only after an expression has already formed. “I think a lot of people actually interacting with a social robot for the first time are disappointed by how limited it is,” says Chaona Chen, a human-robot interaction researcher at the University of Glasgow in Scotland. “Improving robots’ expression in real time is important.”

Through synced facial expressions, future iterations of robots could be sources of connection in our loneliness epidemic, says Yuhang Hu, a roboticist at Columbia University who, along with colleagues, created Emo (SN: 11/7/23).

Cameras in the robot’s eyes let it detect subtleties in human expressions that it then emulates using 26 actuators underneath its soft, blue face. To train Emo, the researchers first put it in front of a camera for a few hours. Much as a mirror helps a person learn what their facial muscles do, watching itself through the camera while researchers ran random motor commands on the actuators helped Emo learn which actuators, activated together, produce which expressions. “Then the robot knows, OK, if I want to make a smiley face, I should actuate these ‘muscles,’” Hu says.
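In code, that self-modeling loop might look something like the minimal sketch below. Everything here is an assumption for illustration (a toy linear “face,” a faked camera-and-landmark reader, made-up names), not the study’s actual method, which learns the mapping from hours of self-observation.

```python
# Hedged sketch of self-modeling: babble random motor commands, watch the
# resulting face, fit a forward model, then invert it to hit a target
# expression. The linear model and all names are illustrative assumptions.
import numpy as np

NUM_ACTUATORS = 26      # Emo's face is driven by 26 actuators
NUM_LANDMARKS = 2 * 68  # assumed: 68 (x, y) facial landmarks

_rng = np.random.default_rng(0)
_TOY_FACE = _rng.normal(size=(NUM_LANDMARKS, NUM_ACTUATORS))  # toy "physics"

def run_actuators_and_observe(commands: np.ndarray) -> np.ndarray:
    """Stand-in for driving the face and reading landmarks off a camera
    frame: a fixed linear response plus a little sensor noise."""
    return _TOY_FACE @ commands + 0.01 * _rng.normal(size=NUM_LANDMARKS)

# 1) Motor babbling: send random commands, record what the face does.
X = _rng.uniform(-1, 1, size=(2000, NUM_ACTUATORS))
Y = np.stack([run_actuators_and_observe(x) for x in X])

# 2) Fit a forward model, commands -> landmarks (plain least squares here).
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# 3) Invert it: given target landmarks (say, a smile), solve for the
#    actuator commands that best reproduce them.
target = run_actuators_and_observe(_rng.uniform(-1, 1, NUM_ACTUATORS))
smile_commands, *_ = np.linalg.lstsq(W.T, target, rcond=None)
```

The gist is the loop itself: babble, observe, fit a forward model, then invert it for a target expression. The real robot swaps the toy linear face for its own motors and camera.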

Next, the researchers played videos of humans making facial expressions. By analyzing nearly 800 videos, Emo learned which early muscle movements signaled which expressions were about to occur. In thousands of further tests with hundreds of other videos, the robot correctly predicted the facial expression a human would make, and re-created it in sync with the human, more than 70 percent of the time. Beyond smiling, Emo can create expressions that involve raising the eyebrows and frowning, Hu says.
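That anticipation step can be sketched the same way. The snippet below trains an off-the-shelf classifier on fabricated “early frames” of landmark motion; the features, the three expression classes and the data are all invented stand-ins, meant only to show how subtle early movements could be mapped to a predicted expression.

```python
# Hedged sketch of anticipation: from the first instants of facial motion,
# predict which expression is coming so the robot can react in sync.
# The data, features and classifier are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
NUM_FEATURES = 20  # assumed: flattened early landmark displacements
EXPRESSIONS = ["smile", "frown", "eyebrow raise"]

# Give each expression a characteristic motion direction (made-up data).
prototypes = rng.normal(size=(len(EXPRESSIONS), NUM_FEATURES))

def early_frames(label: int) -> np.ndarray:
    """Fake 'first few frames' of a clip: a small fraction of the full
    expression's motion, buried in noise, standing in for the subtle
    pre-smile movements Emo learns to spot."""
    return 0.2 * prototypes[label] + 0.3 * rng.normal(size=NUM_FEATURES)

# Train on ~800 clips, echoing the scale described in the article.
train_labels = rng.integers(len(EXPRESSIONS), size=800)
X_train = np.stack([early_frames(y) for y in train_labels])
clf = LogisticRegression(max_iter=1000).fit(X_train, train_labels)

# At run time: watch the opening instants of a face, predict what's
# coming, and fire the matching actuator commands before the peak.
test_labels = rng.integers(len(EXPRESSIONS), size=300)
X_test = np.stack([early_frames(y) for y in test_labels])
print(f"predicted the upcoming expression "
      f"{clf.score(X_test, test_labels):.0%} of the time")
```

Whatever score this toy prints reflects the synthetic data, not the 70-plus percent the researchers report.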

The robot’s timely smiles could relieve some of the awkward and eerie feelings that delayed reactions in robots can cause. Emo’s blue skin, too, was designed to help it avoid the uncanny valley effect (SN: 7/2/19). If people think a robot is supposed to look like a human, “then they will always find some difference or become skeptical,” Hu says. Instead, with Emo’s rubbery blue face, people can “think about it as a new species. It doesn’t have to be a real person.”

The robot has no voice right now, but integrating generative AI chatbot capabilities, like those of ChatGPT, into Emo could make its reactions even more apt. Emo could then anticipate facial reactions from a person’s words as well as their muscle movements, and respond verbally, too. First, though, Emo’s lips need some work. Robot mouth movement today often relies on the jaw to do all the talking, not the lips. “People immediately lose interest … and it’s really weird,” Hu says.

Once the robot has more realistic lips and chatbot capabilities, it could be a better companion. Having Emo as company on late nights in the robotics lab would be a welcome addition, Hu says. “Maybe when I’m working at midnight, we can complain to each other about why there’s so much work or tell a few jokes,” Hu says.

Helen Bradshaw is a spring 2024 science writing intern at Science News. She graduated from Northwestern University with a bachelor’s degree in journalism with a focus on environmental policy and culture.
