Brains encode faces piece by piece

Monkey nerve cells respond to different facial features, combine data for full picture


BUILD-A-FACE  Scientists showed monkeys pictures of faces (top row) while measuring the activity of nerve cells in the monkeys’ brains. By adding together the information given by 205 nerve cells, researchers were able to reconstruct the faces the monkeys had been shown (bottom row).

D. Tsao


A monkey’s brain builds a picture of a human face somewhat like a Mr. Potato Head — piecing it together bit by bit.

The code that a monkey’s brain uses to represent faces relies not on groups of nerve cells tuned to specific faces — as has been previously proposed — but on a population of about 200 cells that code for different sets of facial characteristics. Added together, the information contributed by each nerve cell lets the brain efficiently capture any face, researchers report June 1 in Cell.

“It’s a turning point in neuroscience — a major breakthrough,” says Rodrigo Quian Quiroga, a neuroscientist at the University of Leicester in England who wasn’t part of the work. “It’s a very simple mechanism to explain something as complex as recognizing faces.”

Until now, Quiroga says, the leading explanation for the way the primate brain recognizes faces proposed that individual nerve cells, or neurons, respond to certain types of faces (SN: 6/25/05, p. 406). A system like that might work for the few dozen people with whom you regularly interact. But accounting for all of the peripheral people encountered in a lifetime would require a lot of neurons.

It now seems that the brain might have a more efficient strategy, says Doris Tsao, a neuroscientist at Caltech.

Tsao and coauthor Le Chang used statistical analyses to identify 50 variables that accounted for the greatest differences among 200 face photos. Those variables represented somewhat complex, coordinated changes in the face: for instance, the hairline rising while the face widens and the eyes become more widely set.

The researchers turned those variables into a 50-dimensional “face space,” with each face being a point and each dimension being an axis along which a set of features varied.
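The article doesn’t spell out the statistics, but the step Tsao and Chang describe behaves much like principal component analysis. Here is a minimal sketch in Python, using plain PCA as a stand-in for the study’s face model; the array sizes and the random placeholder data are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: build a 50-D "face space" from face photos.
# Plain PCA stands in for the study's face model; the data below are
# random placeholders, not real face images.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
faces = rng.normal(size=(200, 5000))  # 200 photos, each flattened to 5,000 features

pca = PCA(n_components=50)            # keep the 50 axes capturing the most variance
coords = pca.fit_transform(faces)     # each face becomes a point in face space

print(coords.shape)                   # (200, 50): one 50-D coordinate per face
```

Each row of coords is one face’s position in the 50-dimensional space; each column is one axis along which a suite of features varies together.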

Then, Tsao and Chang extracted 2,000 faces from that map, each linked to specific coordinates. While projecting the faces one at a time onto a screen in front of two macaque monkeys, the team recorded the activity of single neurons in parts of the monkeys’ temporal lobes known to respond specifically to faces. In all, the recordings captured activity from 205 neurons.

Each face cell was tuned to one of the 50 axes previously identified, Tsao and Chang found. The rate at which each cell sent electrical signals was proportional to a given face’s coordinate position along an axis. But a cell didn’t respond to changes in features not captured by that axis. For instance, a cell tuned to an axis where nose width and eye size changed wouldn’t respond to changes in lip shape.  
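In code, that tuning rule is a simple linear read-out: a cell’s firing rate tracks the face’s projection onto the cell’s preferred axis and ignores anything orthogonal to it. A toy sketch, with invented gains and baselines:

```python
import numpy as np

def firing_rate(face_coords, preferred_axis, gain=1.0, baseline=10.0):
    """Toy linear tuning: the rate depends only on the face's projection
    onto this cell's preferred axis (a unit vector in face space)."""
    return baseline + gain * np.dot(face_coords, preferred_axis)

# A cell tuned to the first axis ignores changes along the second.
axis = np.array([1.0, 0.0])
face_a = np.array([2.0, 5.0])
face_b = np.array([2.0, -3.0])   # differs only along the orthogonal axis
print(firing_rate(face_a, axis) == firing_rate(face_b, axis))  # True
```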


FACE MORPH To figure out how the primate brain represents faces, researchers showed monkeys faces that varied along particular “axes” – suites of facial features that changed together. This video shows a face morphing in different dimensions, as an example of the ways multiple features shifted at the same time. Researchers found that the monkeys had individual neurons tuned to specific sets of facial features. L. Chang and D.Y. Tsao/Cell 2017

Adding together the features conveyed by each cell’s activity creates a picture of a complete face. And like a computer creating a full-color display by mixing different proportions of red, green and blue light, the coordinate system lets the brain compose any face within a continuous spectrum.
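That additive logic takes only a few lines to demonstrate: if each cell reports the face’s coordinate along its own axis, a weighted sum of the axes rebuilds the face. The three orthonormal axes below are a hypothetical miniature of the 50-axis system.

```python
import numpy as np

# Hypothetical miniature: three orthonormal axes instead of 50.
axes = np.eye(3)                         # each row is one cell's preferred axis
face = np.array([0.4, -1.2, 2.0])        # a face's coordinates in this tiny space

responses = axes @ face                  # each cell reads out one coordinate
reconstructed = responses @ axes         # weighted sum of axes rebuilds the face

print(np.allclose(reconstructed, face))  # True
```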

“It was a total surprise,” Tsao says. Even when the faces were turned in profile, the same cells still responded to the same features.

Tsao and Chang were then able to re-create that process in reverse using an algorithm. When they plugged in the activity patterns of the 205 recorded neurons, the computer spat out an image that looked almost exactly like what they had shown the monkeys.
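The team’s exact algorithm lives in the paper’s methods, but the gist can be approximated with ordinary least squares: fit a linear map from the 205 firing rates back to the 50 face-space coordinates, then run the decoded coordinates through the face model to render an image. A sketch with simulated recordings (the synthetic rates below are stand-ins, not real data):

```python
import numpy as np

# Simulate a recording session: 2,000 faces, 205 neurons, linear tuning plus noise.
rng = np.random.default_rng(1)
coords = rng.normal(size=(2000, 50))                 # face-space coordinates shown
tuning = rng.normal(size=(50, 205))                  # each neuron's (unknown) tuning
rates = coords @ tuning + rng.normal(scale=0.1, size=(2000, 205))

# Linear decoder: least-squares map from population activity back to face space.
decoder, *_ = np.linalg.lstsq(rates, coords, rcond=None)

decoded = rates[0] @ decoder                         # decode one response pattern
print(np.allclose(decoded, coords[0], atol=0.5))     # close to the true coordinates
```

In the real experiment, the decoded coordinates would then be fed back through the face model to render an image, producing reconstructions like those shown above.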

“People view neurons as black boxes,” says Ed Connor, a neuroscientist at Johns Hopkins University who wasn’t part of the study. “This is a striking demonstration that you can really understand what the brain is doing.”

Elsewhere in the brain, though, neurons don’t use this facial coordinate system. In 2005, Quiroga discovered individual neurons attuned to particular people in the hippocampus, a part of the brain involved in memory. He found, for instance, a single neuron that fired off messages in response to a photo of Jennifer Aniston or conceptually related images, like her name written out or a picture of her Friends costar Lisa Kudrow.

The new results fit well into that picture, Tsao and Quiroga agree. Tsao compares her system to a GPS for facial identity. “These cells are coding the coordinates. And you can use these coordinates for anything you want. You can build a specific lookup table that codes these into specific identities — like Barack Obama, or your mother.”
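That lookup table could be as simple as nearest-neighbor matching in face space. A hypothetical two-axis miniature (the names and coordinates are invented for illustration):

```python
import numpy as np

# Hypothetical lookup table mapping face-space points to identities.
known = {
    "Barack Obama": np.array([1.0, 0.2]),
    "your mother": np.array([-0.5, 1.5]),
}

def identify(point):
    """Return the stored identity nearest to a decoded face-space point."""
    return min(known, key=lambda name: np.linalg.norm(known[name] - point))

print(identify(np.array([0.9, 0.1])))  # "Barack Obama"
```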

Quiroga’s hippocampal cells, just a few neural connections away, are like the output of that table — a sort of speed dial for people and concepts previously encountered.

The different coding strategies might be tied to differences in what these brain areas do. “When we remember things, we forget details but we remember concepts,” Quiroga says. But for telling faces apart, and especially for processing unfamiliar faces, “details are key.”