Even brain images can be biased

Study samples that are too rich and too well-educated may give a biased picture of brain development

Brain scan studies of large groups of people can tell us things about what the “average” brain looks like. But when the sample itself isn’t average, are the brains?

An astonishing number of things that scientists know about brains and behavior are based on small groups of highly educated, mostly white people between the ages of 18 and 21. In other words, those conclusions are based on college students.

College students make a convenient study population when you’re a researcher at a university. It makes for a biased sample, but one that’s still useful for some types of studies. It would be easy to think that for studies of, say, how the typical brain develops, a brain is just a brain, no matter whose skull it’s resting in. A biased sample shouldn’t really matter, right?

Wrong. Studies heavy in rich, well-educated brains may provide a picture of brain development that’s inaccurate for the American population at large, a recent study found. The results provide a strong argument for scientists to pay more attention to who, exactly, they’re studying in their brain imaging experiments.   

It’s “a solid piece of evidence showing that those of us in neuroimaging need to do a better job thinking about our sample, where it’s coming from and who we can generalize our findings to,” says Christopher Monk, who studies psychology and neuroscience at the University of Michigan in Ann Arbor.

The new study is an example of what happens when epidemiology experiments — studies of patterns in health and disease — crash into studies of brain imaging. “In epidemiology we think about sample composition a lot,” notes Kaja LeWinn, an epidemiologist at the University of California, San Francisco. Who is in the study, where they live and what they do are crucial to finding out how disease patterns spread and what contributes to good health. But in conversations with her colleagues in psychiatry about brain imaging, LeWinn realized they weren’t thinking very much about whose brains they were looking at. Particularly when studying healthy populations, she says, there was an idea that “a brain is a brain is a brain.”

But that’s a dangerous assumption. “The brain does not exist in a vacuum, destined to follow some predetermined developmental pathway without any deviation,” LeWinn says. “Quite the opposite, our brains, especially in early life, are exquisitely sensitive to environmental cues, and these cues shape how we develop.” She wondered whether the sampling used in brain imaging studies might affect the results scientists were seeing.

To find out, LeWinn and her colleagues turned to the Pediatric Imaging, Neurocognition and Genetics — or PING — study. “It’s probably the best study we have of pediatric brain imaging,” she says.

Conducted across eight cities (including San Diego, New York and Honolulu), the study included more than 1,000 children between the ages of 3 and 20. It recorded information about the children’s genetics, mental development and emotional function. And of course, it contains lots of images of their brains. The goal was to gain a comprehensive set of data on how children’s brains develop over time.

The PING database is large, well-organized and free for any scientists to look at. LeWinn and her colleagues examined the dataset for the race, sex, parental education and household income of its participants.

The final sample of 1,162 brains was a bit more diverse than the U.S. population. According to the 2010 census, the U.S. population is about 70 percent white, 14 percent black and 7.5 percent Hispanic. By contrast, the racial breakdown of the PING study was 42 percent white, 10 percent black and 24 percent Hispanic, with a larger percentage of “other” or mixed-race participants.

“It was more diverse. That’s not common,” LeWinn says. This could be because the study sites were in large cities with diverse populations, she notes.

The PING study participants weren’t like the average American in other ways, either. The children were from richer households than Americans in general, and their parents were more highly educated. While only 11 percent of Americans have a post-college education, 35 percent of the PING study’s children had parents who had attended graduate school.

So LeWinn and her colleagues set out to make the data in the PING study look more like data from the U.S. population as a whole. They applied sample weights to the brain imaging data, giving more weight to the brains of kids from poorer, less educated families and further weighting the data to match the racial demographics of the United States.
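For readers curious what “applying sample weights” involves, here is a minimal sketch of the general idea, post-stratification weighting, in Python. The group labels, brain measure and numbers below are hypothetical, and the study’s own weighting, which balanced race, sex, parental education and income together, was more involved than this single-variable toy example.

```python
# A minimal sketch of post-stratification weighting, the general idea behind
# reweighting a study sample to match population demographics. The column
# names and numbers are hypothetical, not taken from the PING dataset.
import pandas as pd

# Toy sample: each row is one participant, with a demographic group label
# and a brain measure (e.g., cortical surface area in arbitrary units).
sample = pd.DataFrame({
    "parent_education": ["graduate", "graduate", "college", "high_school"],
    "surface_area":     [1020.0,     990.0,      1005.0,    970.0],
})

# Hypothetical population shares for each group (e.g., from census data).
population_share = {"graduate": 0.11, "college": 0.30, "high_school": 0.59}

# Share of each group within the sample itself.
sample_share = sample["parent_education"].value_counts(normalize=True)

# Weight = population share / sample share, so over-represented groups
# (here, kids of graduate-educated parents) count for less and
# under-represented groups count for more.
sample["weight"] = sample["parent_education"].map(
    lambda group: population_share[group] / sample_share[group]
)

# Compare the unweighted and weighted averages of the brain measure.
unweighted = sample["surface_area"].mean()
weighted = (sample["surface_area"] * sample["weight"]).sum() / sample["weight"].sum()
print(f"unweighted mean: {unweighted:.1f}, weighted mean: {weighted:.1f}")
```

Even in this toy example, down-weighting the over-represented group shifts the average, which is the same basic effect the reweighting had on the PING estimates described below.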

In the newly weighted data, LeWinn and her group noticed that children’s brains matured more quickly. The cortex of the brain reached a peak surface area 2.4 years earlier than the original data would have suggested. Some brain areas — such as the amygdala, an area associated with emotional processing — appeared to reach maturity a full four years earlier. “Low socioeconomic status is associated with faster brain development, so that’s one potential explanation,” LeWinn notes. The group reported their findings October 12 in Nature Communications.

Unfortunately, this study can’t tell scientists if children’s brains actually are maturing faster than we think they are. The weighted sample isn’t a representation of what average brain development looks like in the United States. Instead, it’s just closer to what it might look like. “I would like to see this replicated in an actual sample of people who do represent the population,” says Kate Mills, a cognitive neuroscientist at the University of Oregon in Eugene.

But pinning down brain development wasn’t the point. Instead, the point was to show that when there’s a bias in the sample of participants in a brain imaging study, the data are biased, too. Even a large sample may not provide an accurate picture of brain development — if that sample has biases of its own.

It’s a strong argument for an unbiased sample, no matter the type of study. “It’s illustrating the impact of sample composition on these measures,” Mills says. “It’s not something we can disregard anymore.” She’s optimistic that change is nigh. “The datasets being collected now [in brain imaging studies] are already taking this more seriously.”

But it can be difficult to get study volunteers who represent a particular population. “A representative sample is expensive and challenging,” Monk notes. For his own recent brain imaging work, Monk has teamed up with a large existing project to get a larger sample, but even then, he says, “it’s still questionable whether or not the sample can be made representative.” People may not respond to the call. Volunteers may not show up. But unless scientists put in the extra legwork to make sure those people are accounted for, our picture of how human brains work won’t apply to everyone.

Bethany was previously the staff writer at Science News for Students. She has a Ph.D. in physiology and pharmacology from Wake Forest University School of Medicine.
