Artificial intelligence is learning not to be so literal

Machines that pick up on subtext and sayings could better understand humans


CATCH MY DRIFT  Artificial intelligence that can pick up on subtext and figures of speech could better understand users than strictly literal-minded AI.


HONOLULU — Artificial intelligence is starting to learn how to read between the lines.

AI systems are generally good at responding to direct statements, like “Siri, tell me the weather” or “Alexa, play ‘Despacito’.” But machines can’t yet make small talk the way humans do, says Yejin Choi, a natural language processing researcher at the University of Washington in Seattle. When it comes to conversational nuances like tone and idioms, AI still struggles to understand humans’ intent.

To help machines participate in more humanlike conversation, researchers are teaching AI to understand the meanings of words beyond their strict dictionary definitions. At the recent AAAI Conference on Artificial Intelligence, one group unveiled a system that gauges what a person really means when speaking, and another presented an AI that distinguishes between literal and figurative phrases in writing.

One key conversation skill is picking up on subtext. Someone’s facial expression or intonation can significantly change the implication of their words, says Louis-Philippe Morency, an artificial intelligence researcher at Carnegie Mellon University in Pittsburgh. Describing a movie as “sick” with a grimace conveys something totally different than calling it “sick” with an excited tone and raised eyebrows.

Morency and colleagues designed an artificial intelligence system that watched YouTube clips to learn how nonverbal cues, like facial expressions and voice pitch, can affect the meaning of spoken words.

The AI rated how positive or negative a video subject's expressed sentiment was with 78 percent accuracy, Morency's team reported January 31. The system also proved adept at distinguishing among different expressed emotions, though it recognized some better than others: It identified happiness and sadness with 87.3 and 83.4 percent accuracy, respectively, but was only 69.7 percent accurate at discerning neutral expressions. Morency next wants to test whether this kind of AI can recognize when someone's facial expression and tone of voice are lacing their words with sarcasm.
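The core idea, combining (or "fusing") signals from words, voice and face into a single sentiment judgment, can be sketched in a few lines of code. In the toy Python example below, every feature value and weight is an invented placeholder, and the simple linear "late fusion" scorer is a deliberate simplification; it is not Morency's actual model, which uses learned neural networks.

```python
import numpy as np

def fuse_and_score(text_vec, audio_vec, visual_vec, weights):
    """Late fusion: concatenate per-modality features, apply a linear
    scorer, and squash the result to a sentiment score in (-1, 1)."""
    features = np.concatenate([text_vec, audio_vec, visual_vec])
    return float(np.tanh(features @ weights))

# Toy features for the word "sick" delivered in two different ways.
text    = np.array([0.0])         # the word alone is ambiguous
excited = np.array([0.9])         # rising, enthusiastic pitch
flat    = np.array([-0.8])        # flat, disgusted tone
smile   = np.array([0.8, 0.6])    # raised eyebrows, smile
grimace = np.array([-0.7, -0.9])  # furrowed brow, grimace

# Illustrative weights: nonverbal cues outweigh the literal word.
weights = np.array([0.5, 1.0, 0.8, 0.8])

print(fuse_and_score(text, excited, smile, weights))  # ~ +0.97: "sick" = great
print(fuse_and_score(text, flat, grimace, weights))   # ~ -0.97: "sick" = awful
```

With made-up weights like these, the same ambiguous word lands at opposite ends of the sentiment scale depending on the nonverbal channels, which is the behavior a trained multimodal model is meant to learn from data rather than have hand-coded.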

Even in written communication, understanding someone’s intent is rarely as straightforward as stringing together the literal meanings of the words. Idioms are tricky because they can be interpreted literally or figuratively, depending on context. For example, the same wording can be used in a literal headline — “Kids playing with fire: Experts warn parents to look out for danger signs” — and a figurative one — “Playing with fire in Afghanistan.”

This kind of ambiguity can be a stumbling block for AI systems that analyze sentiments expressed online or translate documents into other languages. To get around this problem, Changsheng Liu and Rebecca Hwa, computer scientists at the University of Pittsburgh, designed a system that determines whether a phrase is meant literally or figuratively based on the surrounding words. In the case of the “playing with fire” headlines, the system might expect to see the words “kids” and “playing” together, and so be more likely to deem the first headline literal, but find the words “Afghanistan” and “playing” unrelated, and judge the second headline figurative.

This AI system learned how to associate different words by reading sentences from Wikipedia entries. In experiments, the program judged whether idiomatic phrases in sentences were meant literally or figuratively with 73 to 75 percent accuracy, Hwa and Liu reported January 29.
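The intuition behind this kind of context check, that an idiom leans literal when its words are semantically related to the words around them, can be sketched in a few lines. In the Python toy below, the tiny hand-built "embedding" table, the cosine-similarity scorer and the decision rule are all hypothetical stand-ins, not Liu and Hwa's actual system, which learned its word associations from Wikipedia text.

```python
import numpy as np

# Tiny hand-built "embedding" table; a real system learns these vectors.
EMB = {
    "playing":     np.array([0.6, 0.5, 0.1]),
    "fire":        np.array([0.9, 0.1, 0.0]),
    "kids":        np.array([0.7, 0.6, 0.0]),
    "danger":      np.array([0.8, 0.2, 0.1]),
    "afghanistan": np.array([0.0, 0.1, 0.9]),
    "policy":      np.array([0.1, 0.0, 0.8]),
}

def cosine(a, b):
    """Cosine similarity: near 1 means closely related, near 0 unrelated."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def literal_score(idiom_words, context_words):
    """Average relatedness between an idiom's words and its context.
    A high score suggests a literal reading; a low one, figurative."""
    sims = [cosine(EMB[i], EMB[c])
            for i in idiom_words for c in context_words]
    return sum(sims) / len(sims)

idiom = ["playing", "fire"]
print(literal_score(idiom, ["kids", "danger"]))        # ~0.93 -> literal
print(literal_score(idiom, ["afghanistan", "policy"])) # ~0.14 -> figurative
```

In this sketch, "kids" and "danger" sit close to "playing" and "fire" in the toy vector space, so the first headline scores as literal, while "Afghanistan" and "policy" do not, pushing the second toward figurative.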

Computers’ ability to recognize and interpret nonliteral language is becoming more important as AI is integrated into more aspects of our lives, says Julia Rayz, a natural language processing researcher at Purdue University in West Lafayette, Ind., who was not involved in either project. Other researchers are tackling similar problems with metaphor and irony.

“We’re starting to enter that uncanny valley, where [AI] will become so good that, at least in these simple conversations … it will be nearly like talking to a human,” says Robert West, a computer scientist at École Polytechnique Fédérale de Lausanne in Switzerland who was not involved in the projects. But understanding linguistic nuance is crucial. If AI can’t do that, “we will never have intelligent machines that will be able to survive any conversation.”
