A new study takes aim at a central plank of the systemic-racism narrative—namely, that blacks and Hispanics are punished much more harshly than whites for the same crimes. The study’s upshot: a “very small” amount of bias for drug crimes, while the results for other crime categories were “indistinguishable from statistical noise.”

The paper, written by Christopher J. Ferguson and Sven Smith and published in the journal Aggression and Violent Behavior, is a meta-analysis—a “study of studies”—that statistically combines the results of about 50 papers published since 2005, addressing outcomes such as imprisonment, sentence length, and departures from sentencing guidelines. The study is particularly valuable for explaining the current state of research. After all, it’s hard to compare apples to apples in this context: if one person gets a harsher punishment than another, it’s possible there was a good reason, or at least a nonracial one, for it. What appears to be racial discrimination might simply be the result of confounding factors that are more common in one group than another. Complicating matters further, racial bias elsewhere in the system could distort the mix of cases that arrive at the punishment stage. (Imagine, for example, that cops arrested every black suspect they saw committing some minor crime but arrested whites only if other aggravating factors were present.)

How well do existing studies account for the many variables that affect punishment? Not particularly well. As Ferguson and Smith put it, “Most studies do control for age and prior criminal record,” but other variables “such as employment status, class of defendant, attorney type (private versus public), or the presence of a cooperative victim are seldom controlled.”

These differences matter. Better-quality studies in the meta-analysis were less likely to suggest discrimination, while studies with citation bias—meaning their discussion of previous literature included no citations of findings that conflicted with the authors’ hypothesis—produced bigger estimates of discrimination.

Understanding the paper’s nuances more thoroughly, alas, requires some technical explanation. The underlying studies address several different outcomes using different data sets and different methods. To combine them into, essentially, one humongous study, the authors convert all the results into “effect sizes” (or r) that can range from –1 to 1, with positive numbers representing discrimination against minority defendants.

Ferguson and Smith point out that effect sizes of less than 0.1 often result from various types of statistical noise, even if they’re otherwise considered statistically significant, and so they consider the combined results to meet their evidentiary standards only when they clear that bar. Statistical significance means a given result would be unlikely to arise purely by chance; the concept doesn’t address, for example, a failure to control for confounding variables, subtle problems with the underlying data, or technical foibles in setting up a statistical model.

For drug crimes—the one category with a “very small” result that clears their threshold—the effect size was about 0.13 for both anti-black and anti-Latino discrimination. For all crimes, violent crimes, property crimes, and juvenile crimes, however, the results were mostly statistically significant but ran between 0.05 and 0.1. These are far smaller effect sizes than most critics of the criminal-justice system would predict, and, given the weaknesses of the underlying research, it’s hard to be confident that they’re not just noise of some kind or another.

To play devil’s advocate, though, they arguably are big enough to warrant concern—at least if they’re real. And effect sizes can appear larger or smaller depending on how one expresses them. Effects of 0.05 to 0.13, while hardly constituting a dominant explanation for how different people fare in the justice system, can also be stated as “odds ratios” of about 1.2 to 1.6—meaning that black or Latino defendants face 20 percent to 60 percent higher odds of a bad outcome such as incarceration, which doesn’t sound trivial. Expressed as concrete probabilities in specific examples, the results leave yet another impression: with an odds ratio of 1.3, for instance, a marginal white defendant with 50/50 odds of being locked up would have 65/50 odds, or about a 57 percent chance, if he were black—not a night-and-day difference, but arguably a substantial level of bias if it pervades the justice system.
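The arithmetic behind these conversions can be sketched briefly. The paper doesn’t spell out its exact formula, so this is an illustrative sketch using a standard meta-analytic approximation (r to Cohen’s d to a logit-based odds ratio) plus simple odds arithmetic; the function names are mine, not the authors’:

```python
import math

def r_to_odds_ratio(r: float) -> float:
    """Standard approximation: correlation r -> Cohen's d -> odds ratio."""
    d = 2 * r / math.sqrt(1 - r ** 2)            # r to Cohen's d
    return math.exp(d * math.pi / math.sqrt(3))  # logit-based d to OR

def shift_probability(p_base: float, odds_ratio: float) -> float:
    """Apply an odds ratio to a baseline probability of a bad outcome."""
    odds = p_base / (1 - p_base) * odds_ratio
    return odds / (1 + odds)

# Effect sizes of 0.05 and 0.13 land near the article's 1.2-to-1.6 range:
for r in (0.05, 0.13):
    print(f"r = {r:.2f} -> odds ratio ~ {r_to_odds_ratio(r):.2f}")

# The worked example: 50/50 baseline odds, odds ratio 1.3
print(f"{shift_probability(0.50, 1.3):.0%}")  # ~57%
```

Under this approximation, r = 0.05 gives an odds ratio of about 1.2 and r = 0.13 about 1.61, consistent with the range quoted above.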

When I ran this thinking by Ferguson via email, he reiterated that effects of this size often can’t be trusted and defended the paper’s metric:

Ultimately, I think the best and clearest way to communicate something is, if we’re going to try to predict an outcome, how much better than chance will we be if all we know is X (which is race in this case).  In that sense, r is the better metric and, from that, we can see that knowing the race of a defendant is not very helpful to us in predicting their outcome in the criminal justice system, accounting for something like 0.3% of the variance in outcome.

He added that, since an odds ratio of 1:1 denotes no effect at all, an odds ratio of 1.2:1 is not particularly high: “you can make it sound kind of impressive (a 20% increase in risk), but statistically it’s basically got nowhere to go but back to no correlation at all.”
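Ferguson’s variance-explained figure can be checked directly: the share of outcome variance a correlation r accounts for is simply r squared. His “something like 0.3%” presumably reflects a pooled r near 0.05; the loop below just shows the arithmetic across the range reported in the paper:

```python
# Variance explained by a correlation r is r squared.
for r in (0.05, 0.10, 0.13):
    print(f"r = {r:.2f} explains {r ** 2:.2%} of outcome variance")
```

Even the largest reported effect, 0.13, explains under 2 percent of the variance in outcomes.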



City Journal is a publication of the Manhattan Institute for Policy Research (MI), a leading free-market think tank.
