How missing data makes it harder to measure racial bias in policing
ID’ing when officers didn’t use force against civilians is critical to quantifying bias
From 2012 to 2015, a team of researchers collected 2.9 million police officer patrol records in Chicago. The team’s analysis of that data, from nearly 7,000 officers, showed that Black police officers were less likely to arrest civilians than white police officers patrolling the same neighborhood (SN: 2/11/21). Officers arrested on average eight people per shift, with Black officers making 24 percent fewer arrests than white officers. But an alternate analysis, one that excluded shifts where no arrests occurred, flipped the results. That made it appear as if Black officers made 12 percent more arrests than white officers.
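The selection problem behind that flip can be reproduced with a few lines of arithmetic. The sketch below uses made-up shift-level arrest counts (not the Chicago data) to show how dropping the zero-arrest shifts — the nonevents — can reverse which group appears to make more arrests:

```python
# Hypothetical shift-level arrest counts, chosen only to illustrate
# the selection effect; these are NOT the Chicago study's numbers.
# Each list entry is one patrol shift's arrest total.
black_shifts = [0] * 50 + [8] * 50   # more shifts with no arrests at all
white_shifts = [0] * 20 + [6] * 80

def mean(xs):
    return sum(xs) / len(xs)

# Full data, nonevents included: Black officers average fewer arrests.
full_black = mean(black_shifts)      # 4.0 arrests per shift
full_white = mean(white_shifts)      # 4.8 arrests per shift

# Flawed analysis: discard zero-arrest shifts before comparing.
black_nonzero = [x for x in black_shifts if x > 0]
white_nonzero = [x for x in white_shifts if x > 0]
cond_black = mean(black_nonzero)     # 8.0 arrests per (nonzero) shift
cond_white = mean(white_nonzero)     # 6.0 arrests per (nonzero) shift

print(full_black < full_white)       # True  -> fewer arrests overall
print(cond_black > cond_white)       # True  -> conclusion flips
```

The direction of the comparison depends entirely on whether the empty shifts stay in the denominator — which is the point Knox makes about excluding nonevents.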
Failing to account for events that don’t happen — police allowing a jaywalker to pass, opting not to make an arrest (usually for minor issues like possessing a small amount of drugs) or never firing a drawn gun — is problematic, says policing expert Dean Knox of the University of Pennsylvania. “Instead of drawing the conclusion that minority officers are engaging in less enforcement,” he says of his Chicago study, “you could mistakenly conclude that they are engaging in more enforcement.” The flip occurred because, compared with white officers, Black officers more often went out on patrols without issuing any arrests.
Nonevents of this nature are commonly excluded in policing data. Though a large body of evidence suggests that police in the United States discriminate against Black people, Knox says, many police departments only collect data on a smattering of the interactions between their officers and civilians. Cell phone videos, like those of Eric Garner in a chokehold and George Floyd struggling to breathe, tend to emerge only when encounters have spiraled out of control. That makes it difficult to measure racial bias in policing or come up with targeted solutions to reduce that bias.
How, though, can researchers studying policing account for nonevents? The laborious Chicago data collection by Knox and his team is not always feasible. And even that rigorous study, reported in Science earlier this year, still had gaps: The team had data on when police stopped, arrested or used force on civilians, but not on minor interactions that didn’t meet the department’s recording requirements.
When research teams accept these problematic datasets at face value, writes Knox in a November 4 essay in Science, they often arrive at contradictory conclusions. Disagreements in the literature allow public officials and the media to cherry-pick studies that support their viewpoint, whether arguing for or against implicit bias training to overcome unconscious stereotypes or prioritizing the recruitment of minority officers.
A long chain of events
Knox wrote the essay following the publication of a controversial, and now retracted, study that appeared in 2019 in the Proceedings of the National Academy of Sciences. “White officers are not more likely to shoot minority civilians than non-White officers,” the authors of that study wrote. They concluded that policies aimed at increasing police diversity would do little to stem racial disparities in police killings.
The study gained enormous traction, especially among conservative media outlets and politicians, Knox says. “This was one of the go-to pieces that people use to deny the existence of bias in policing.”
But the authors’ findings were mathematically baseless, says Knox, who along with Jonathan Mummolo, a policing expert at Princeton University, wrote an article debunking the study on Medium. Some 800 academics and researchers signed the piece. The study’s authors failed to consider total police encounters and then measure what fraction of those encounters resulted in deadly violence, Knox says.
But that narrow focus on fatal police shootings, a rare occurrence that typically happens at the culmination of a long chain of events, ignores all potential biases earlier in the chain, Knox says. The first potential bias arises with an officer’s decision to approach a civilian or let them pass. Knox acknowledges that a separate layer of research is needed to account for societal-level disparities, such as the presence of more officers in Black, often impoverished, neighborhoods and longstanding discriminatory practices that reduce the quality of education and other services in such neighborhoods.
“Even if you can’t see all the things that happened before, just acknowledging they exist is imperative,” Knox says.
Consider this real-life example. On July 10, 2015, Texas state trooper Brian Encinia pulled over Sandra Bland, a Black woman, for failing to signal a lane change. The exchange grew heated and culminated with Encinia arresting Bland for alleged assault. Bland, meanwhile, alleged that Encinia threw her to the ground. Both events occurred off camera. Bland’s subsequent death in a county jail caused public outcry.
Focusing solely on Bland’s arrest, and not all that happened before, would provide little information on how Bland wound up in jail for such a minor offense, or how to prevent such an outcome in the future. But because Encinia’s dashcam recorded the entire exchange, policing researchers, in this case interested in tone and language, could identify key steps leading up to her arrest. For instance, the researchers reported in Law and Society Review in 2017, Encinia’s language starts off polite but becomes increasingly agitated as Bland refuses to comply with his orders. His once formal commands, such as “step out of the car,” become informal and unprofessional: “I’m going to yank you out of here.”
That word “yank” indicates that Encinia is losing control of the situation, says Belén Lowrey-Kinberg, a criminologist at St. Francis College in New York City. Previous research has shown that when officers pivot from formal to informal language, violence can follow.
While this is a case study of a single event, the study provides “a great example of how situations can escalate,” says criminologist Justin Nix of the University of Nebraska Omaha.
Fixing flawed data
Flawed police data does not need to be thrown out, Knox says. His team has developed an algorithm to account for gaps in the data at all points in a police-civilian interaction. The algorithm weights the various possible degrees of discrimination at each point in a chain of events — perhaps race did not factor into Encinia’s decision to pull Bland over because he could not see her face, for example, or maybe race played a large role because most drivers in that area are white. Summing across those points yields a range of values that brackets how much discrimination could have occurred in any given scenario, Knox says.
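The article doesn’t publish Knox’s algorithm, but the bounding logic it describes — assign each link in the chain a low and a high plausible level of bias, then combine them into an overall range — can be caricatured in a few lines. Every name and number below is hypothetical:

```python
# A toy sketch of the bounding idea, NOT Knox's actual algorithm.
# For each link in the encounter chain, posit a low and a high
# plausible contribution of racial bias (hypothetical values on an
# arbitrary 0-to-1 scale), then sum them into an overall range.
stages = {
    "decision to stop":   (0.00, 0.30),  # low end: driver's race unseen
    "decision to arrest": (0.05, 0.20),
    "use of force":       (0.00, 0.15),
}

lower = sum(lo for lo, hi in stages.values())
upper = sum(hi for lo, hi in stages.values())

print(f"possible discrimination: {lower:.2f} to {upper:.2f}")
```

The output is a range rather than a single number, which is the point: when parts of the chain are unobserved, the honest answer is an interval of possible discrimination, not a point estimate.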
The program operates on a very general principle, Knox says. “What are the data that you see?” and “What are the data that you don’t see?”
Thinking about the whole chain of events also points to how to collect better statistics.
Consider a study of police shootings by Nix and policing expert John Shjarback of Rowan University in Glassboro, N.J., that appeared November 10 in PLOS One. The researchers were interested in racial disparities in officers’ use of force against Black and white civilians. National databases include only shootings that result in a civilian’s death. But whether someone lives or dies after being shot hinges on several factors, such as proximity to a trauma center, location of the gunshot wound and access to first aid. So the researchers sought to examine all police shootings, including those that resulted in injury but not death. To do so, they relied on records from four states — California, Colorado, Florida and Texas — that have collected this information for years.
The data revealed that some 45 percent of victims suffered nonfatal injuries. Factoring in the relative populations of Black and white civilians showed that for all four states, racial disparities in injuries were higher than racial disparities in fatalities. For example, from 2009 to 2014 in Florida, Black people were roughly three times more likely than white people to be shot and killed by police, but over five times more likely to be injured. Across all four states, and for reasons that are not entirely clear, Black victims were 7 percentage points less likely to die of their injuries than white victims.
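The arithmetic behind disparity ratios like these is straightforward. The counts below are invented for illustration — they are not the Florida or PLOS One figures — but they show how a database that records only deaths can understate the disparity that counting all shootings reveals:

```python
# Hypothetical shooting counts and populations, invented to mimic the
# pattern in the study (NOT the actual four-state data).
black = {"fatal": 90,  "nonfatal": 110, "population": 3_000_000}
white = {"fatal": 150, "nonfatal": 100, "population": 15_000_000}

def rate_per_million(count, population):
    return count / population * 1_000_000

# A fatal-only database captures this disparity...
fatal_ratio = (rate_per_million(black["fatal"], black["population"])
               / rate_per_million(white["fatal"], white["population"]))

# ...but counting every shooting, fatal or not, reveals a larger one.
all_black = black["fatal"] + black["nonfatal"]
all_white = white["fatal"] + white["nonfatal"]
shot_ratio = (rate_per_million(all_black, black["population"])
              / rate_per_million(all_white, white["population"]))

print(f"fatal-only disparity: {fatal_ratio:.1f}x")
print(f"all-shootings disparity: {shot_ratio:.1f}x")
```

In this toy example the fatal-only ratio is 3.0 and the all-shootings ratio is 4.0, because a smaller fraction of the Black victims’ shootings end in death — the same mechanism that makes fatal-only national databases undercount officers’ use of deadly force.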
National databases that only include records of civilians who die at the hands of the police underestimate officers’ use of deadly force against Black civilians, Nix says. Death “is the end of a very long sequence of events. In our paper we backed up one link in the chain.” That is, the researchers looked at all instances where officers used deadly force and not just those that resulted in death.
Knox is now working with two police departments to break down police-civilian encounters in more detail. Those departments require officers to turn on their body cameras when they believe an interaction with a civilian will rise to the level of an official interaction. (Officers have discretion at this point in the process, Knox acknowledges, so as with the Chicago study, that first link in the chain remains elusive.) Knox and his team will analyze transcripts from each encounter for language and tone, such as normal voice or shouting — a quantitative version of the approach Lowrey-Kinberg used to unpack the encounter between Encinia and Bland. Computer vision techniques will parse out gestures, such as “weapon drawn.” Knox says he hopes the data will help his team get closer to reconstructing entire interactions, including identifying nonevents in any given chain.
“You don’t want just the side of the story as written by an officer,” Knox says. “You want the whole interaction.”