Few academic ideas have been as eagerly absorbed into public discourse in recent years as “implicit bias.” Embraced by a president, a would-be president, and the nation’s top law-enforcement official, the implicit-bias conceit has launched a movement to remove the concept of individual agency from the law and spawned a multimillion-dollar consulting industry. The statistical basis on which it rests is now crumbling, but don’t expect its influence to wane anytime soon.

Implicit bias purports to answer the question: Why do racial disparities persist in household income, job status, and incarceration rates, when explicit racism has, by all measures, greatly diminished over the last half-century? The reason, according to implicit-bias researchers, lies deep in our brains, outside the reach of conscious thought. We may consciously embrace racial equality, but almost all of us harbor unconscious biases favoring whites over blacks, the proponents claim. And those unconscious biases, which the implicit-bias project purports to measure scientifically, drive the discriminatory behavior that, in turn, results in racial inequality.

The need to plumb the unconscious to explain ongoing racial gaps arises for one reason: it is taboo in universities and mainstream society to acknowledge intergroup differences in interests, abilities, cultural values, or family structure that might produce socioeconomic disparities.

The implicit-bias idea burst onto the academic scene in 1998 with the rollout of a psychological instrument called the implicit association test (IAT). Created by social psychologists Anthony Greenwald and Mahzarin Banaji, with funding from the National Science Foundation and National Institute of Mental Health, the IAT was announced as a breakthrough in prejudice studies: “The pervasiveness of prejudice, affecting 90 to 95 percent of people, was demonstrated today . . . by psychologists who developed a new tool that measures the unconscious roots of prejudice,” read the press release.

The race IAT (there are non-race varieties) displays a series of black faces and white faces on a computer; the test subject must sort them quickly by race into two categories, represented by the “i” and “e” keys on the keyboard. Next, the subject sorts “good” or “positive” words like “pleasant,” and “bad” or “negative” words like “death,” into good and bad categories, represented by those same two computer keys. The sorting tasks are then intermingled: faces and words appear at random on the screen, and the test-taker has to sort them with the “i” and “e” keys. Next, the sorting protocol is reversed: if, before, a black face was sorted with the same key as a “bad” word, a black face is now sorted with the same key as a “good” word, and a white face with the other key. If a subject takes longer sorting black faces using the computer key associated with a “good” word than he does sorting white faces using the computer key associated with a “good” word, the IAT deems the subject a bearer of implicit bias. The IAT ranks the subject’s degree of implicit bias based on the differences in milliseconds with which he accomplishes the different sorting tasks; at the end of the test, he finds out whether he has a strong, moderate, or weak “preference” for blacks or for whites. A majority of test-takers (including many blacks) are rated as showing a preference for white faces. Additional IATs sort pictures of women, the elderly, the disabled, and other purportedly disfavored groups.
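
For readers curious about the mechanics, here is a minimal sketch of how a reaction-time difference score of this general kind can be turned into a “preference” label. It is an illustration only: the function names, the sample latencies, and the cutoff values are hypothetical stand-ins, and Project Implicit’s actual scoring algorithm adds further steps (error penalties, latency trimming) that are omitted here.

```python
import statistics

def iat_style_score(congruent_ms, incongruent_ms):
    """Simplified IAT-style difference score (a sketch, not Project Implicit's
    full algorithm). congruent_ms: latencies, in milliseconds, when white faces
    share a key with "good" words; incongruent_ms: latencies when black faces
    share a key with "good" words."""
    diff = statistics.mean(incongruent_ms) - statistics.mean(congruent_ms)
    pooled_sd = statistics.stdev(congruent_ms + incongruent_ms)
    return diff / pooled_sd  # positive = slower on the black-plus-"good" pairing

def preference_label(d):
    # Illustrative cutoffs, similar in spirit to the published convention.
    magnitude = abs(d)
    if magnitude < 0.15:
        return "little or no automatic preference"
    strength = "strong" if magnitude > 0.65 else "moderate" if magnitude > 0.35 else "slight"
    group = "white" if d > 0 else "black"
    return f"{strength} automatic preference for {group} faces"

# Hypothetical latencies for a single test-taker:
score = iat_style_score([650, 700, 690, 720], [780, 810, 760, 830])
print(preference_label(score))  # e.g., "strong automatic preference for white faces"
```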

Greenwald and Banaji did not pioneer such response-time studies; psychologists already used response-time methodology to measure how closely concepts are associated in memory. And the idea that automatic cognitive processes and associations help us navigate daily life is also widely accepted in psychology. But Greenwald and Banaji, now at the University of Washington and Harvard University, respectively, pushed the response-time technique and the implicit-cognition idea into charged political territory. Not only did they confidently assert that any differences in sorting times for black and white faces flow from unconscious prejudice against blacks; they also claimed that such unconscious prejudice, as measured by the IAT, predicts discriminatory behavior. It is “clearly . . . established that automatic race preference predicts discrimination,” they wrote in their 2013 bestseller Blindspot, which popularized the IAT. And in the final link of their causal chain, they hypothesized that this unconscious predilection to discriminate is a cause of racial disparities: “It is reasonable to conclude not only that implicit bias is a cause of Black disadvantage but also that it plausibly plays a greater role than does explicit bias in explaining the discrimination that contributes to Black disadvantage.”

The implicit-bias conceit spread like wildfire. President Barack Obama denounced “unconscious” biases against minorities and females in science in 2016. NBC anchor Lester Holt asked Hillary Clinton during a September 2016 presidential debate whether “police are implicitly biased against black people.” Clinton answered: “Lester, I think implicit bias is a problem for everyone, not just police.” Then–FBI director James Comey claimed in a 2015 speech that “much research” points to the “widespread existence of unconscious bias.” “Many people in our white-majority culture,” Comey said, “react differently to a white face than a black face.” The Obama Justice Department packed off all federal law-enforcement agents to implicit-bias training. Clinton promised to help fund it for local police departments, many of which had already begun the training following the 2014 fatal police shooting of Michael Brown in Ferguson, Missouri.

A parade of journalists confessed their IAT-revealed preferences, including Malcolm Gladwell in his acclaimed book Blink. Corporate diversity trainers retooled themselves as purveyors of the new “science of bias.” And the legal academy started building the case that the concept of intentionality in the law was scientifically obtuse. Leading the charge was Jerry Kang, a UCLA law professor in the school’s critical race studies program who became UCLA’s fantastically paid vice chancellor for Equity, Diversity and Inclusion in 2015 (starting salary: $354,900, now up to $444,000). “The law has an obligation to respond to changes in scientific knowledge,” Kang said in a 2015 lecture. “Federal anti-discrimination law has been fixated on, and obsessed with, conscious intent.” But the new “behavioral realism,” as the movement to incorporate IAT-inspired concepts into the law calls itself, shows that we “discriminate without the intent and awareness to discriminate.” If we look only for conscious intent, we will “necessarily be blind to a whole bunch of real harm that is painful and consequential,” he concluded. Kang has pitched behavioral realism to law firms, corporations, judges, and government agencies.

A battle is under way regarding the admissibility of IAT research in employment-discrimination lawsuits: plaintiffs’ attorneys regularly offer Anthony Greenwald as an expert witness; the defense tries to disqualify him. Greenwald has survived some defense challenges but has lost others. Kang is philosophical: “It might not matter if Tony’s expert testimony is kicked out now,” he said in his 2015 lecture—in ten years, everyone will know that our brains harbor hidden biases. And if that alleged knowledge becomes legally actionable, then every personnel decision can be challenged as the product of implicit bias. The only way to guarantee equality of opportunity would be to mandate equality of result through quotas, observes the University of Pennsylvania’s Philip Tetlock, a critic of the most sweeping IAT claims.

The potential reach of the behavioral-realism movement, which George Soros’s Open Society Foundation is underwriting, goes far beyond employment-discrimination litigation. Some employers are using the IAT to screen potential workers, diversity consultant Howard Ross says. More and more college administrations require members of faculty-search committees to take the IAT to confront their hidden biases against minority and female candidates. Promotion committees at many corporations undergo the IAT. UCLA law school strongly encourages incoming law students to take the test to confront their implicit prejudice against fellow students; the University of Virginia might incorporate the IAT into its curriculum. Kang has argued for FCC regulation of how the news media portray minorities, to lessen implicit prejudice. If threats to fair treatment “lie in every mind,” as Kang and Banaji argued in a 2006 California Law Review article, then the scope for government intervention in private transactions to overcome those threats is almost limitless.

But though proponents refer to IAT research as “science”—or, in Kang’s words, “remarkable,” “jaw-dropping” science—their claims about its social significance leapfrogged ahead of scientific validation. There is hardly an aspect of IAT doctrine that is not now under methodological challenge.

Any social-psychological instrument must pass two tests to be considered accurate: reliability and validity. A psychological instrument is reliable if the same test subject, taking the test at different times, achieves roughly the same score each time. But IAT bias scores have a lower rate of consistency than is deemed acceptable for use in the real world—a subject could be rated with a high degree of implicit bias on one taking of the IAT and a low or moderate degree the next time around. A recent estimate puts the reliability of the race IAT at half of what is considered usable. No evidence exists, in other words, that the IAT reliably measures anything stable in the test-taker.
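
To make the reliability point concrete, here is a minimal sketch, using made-up scores, of what test-retest reliability measures: the correlation between the same subjects’ scores on two sittings. The numbers and the 0.8 benchmark in the comment are illustrative conventions, not figures from the IAT literature.

```python
from statistics import correlation  # requires Python 3.10+

# Hypothetical difference scores for the same ten subjects, tested twice, weeks apart.
session_1 = [0.62, 0.10, 0.45, -0.05, 0.80, 0.30, 0.55, 0.20, 0.70, 0.15]
session_2 = [0.35, 0.40, 0.10, 0.30, 0.50, 0.65, 0.15, 0.60, 0.45, 0.55]

r = correlation(session_1, session_2)  # test-retest reliability coefficient
print(f"test-retest r = {r:.2f}")
# A common psychometric rule of thumb asks for roughly r = 0.8 before an
# instrument is used to characterize individuals. A low r means the same person
# can land in the "strong bias" band one week and the "slight bias" band the next.
```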

But the fiercest disputes concern the IAT’s validity. A psychological instrument is deemed “valid” if it actually measures what it claims to be measuring—in this case, implicit bias and, by extension, discriminatory behavior. If the IAT were valid, a high implicit-bias score would predict discriminatory behavior, as Greenwald and Banaji asserted from the start. It turns out, however, that IAT scores have almost no connection to what ludicrously counts as “discriminatory behavior” in IAT research—trivial nuances of body language during a mock interview in a college psychology laboratory, say, or a hypothetical choice to donate to children in Colombian, rather than South African, slums. Oceans of ink have been spilled debating the statistical strength of the correlation between IAT scores and lab-induced “discriminatory behavior” on the part of college students paid to take the test. The actual content of those “discriminatory behaviors” gets mentioned only in passing, if at all, and no one notes how remote those behaviors are from the discrimination that we should be worried about.

Even if we accept at face value that the placement of one’s chair in a mock lab interview or decisions in a prisoner’s-dilemma game are significant “discriminatory behaviors,” the statistical connection between IAT scores and those actions is negligible. A 2009 meta-analysis of 122 IAT studies by Greenwald, Banaji, and two management professors found that IAT scores accounted for only 5.5 percent of the variation in laboratory-induced “discrimination.” Even that low score was arrived at by questionable methods, as Jesse Singal discussed in a masterful review of the IAT literature in New York. A team of IAT skeptics—Fred Oswald of Rice University, Gregory Mitchell of the University of Virginia law school, Hart Blanton of the University of Connecticut, James Jaccard of New York University, and Philip Tetlock—noticed that Greenwald and his coauthors had counted opposite behaviors as validating the IAT. If test subjects scored high on implicit bias via the IAT but demonstrated better behavior toward out-group members (such as blacks) than toward in-group members, that was a validation of the IAT on the theory that the subjects were overcompensating for their implicit bias. But studies that found a correlation between a high implicit-bias score and discriminatory behavior toward out-group members also validated the IAT. In other words: heads, I win; tails, I win.
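
A note on what “5.5 percent of the variation” means: the share of variance explained is the square of the correlation coefficient, so the meta-analysis’s figure corresponds to a correlation of roughly 0.23, a small effect by the conventional benchmarks used in psychology. The arithmetic:

```python
r_squared = 0.055      # share of variance in lab "discrimination" tied to IAT scores
r = r_squared ** 0.5   # the corresponding correlation coefficient
print(round(r, 2))     # 0.23
```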

Academic research has convinced companies that implicit bias plays a role in their hiring practices, but the firms can provide few examples of qualified minority candidates being denied a job. (JIM WEST/ALAMY STOCK PHOTO)

Greenwald and Banaji now admit that the IAT does not predict biased behavior. The psychometric problems associated with the race IAT “render [it] problematic to use to classify persons as likely to engage in discrimination,” they wrote in 2015, just two years after their sweeping claims in Blindspot. The IAT should not be used, for example, to select a bias-free jury, maintains Greenwald. “We do not regard the IAT as diagnosing something that inevitably results in racist or prejudicial behavior,” he told The Chronicle of Higher Education in January. Their fallback position: though the IAT does not predict individual biased behavior, it predicts discrimination and disadvantage in the aggregate. “Statistically small effects” can have “societally large effects,” they have argued. If a society has higher levels of implicit bias against blacks as measured on the IAT, it will allegedly have higher levels of discriminatory behavior. Hart Blanton, one of the skeptics, dismisses this argument. If you don’t know what an instrument means on an individual level, you don’t know what it means in the aggregate, he told New York’s Singal. In fairness to Greenwald and Banaji, it is true that a cholesterol score, say, is more accurate at predicting heart attacks the larger the sample of subjects. But too much debate exists about what the IAT actually measures for much confidence about large-scale effects.

Initially, most of the psychology profession accepted the startling claim that one’s predilection to discriminate in real life is revealed by the millisecond speed with which one sorts images. But possible alternative meanings of a “pro-white” IAT score are now beginning to emerge. Older test-takers may have cognitive difficulty with the shifting instructions of the IAT. Objective correlations between group membership and socioeconomic outcomes may lead to differences in sorting times, as could greater familiarity with one ethnic-racial group compared with another. These alternative meanings should have been ruled out before the world learned that a new “scientific” test had revealed the ubiquity of prejudice.

The most recent meta-analysis deals another blow to the conventional IAT narrative. This study, not yet formally published, looked at whether changes in implicit bias allegedly measured by the IAT led to changes in “discriminatory behavior”—defined as the usual artificial lab conduct. While small changes in IAT scores can be induced in a lab setting through various psychological priming techniques, they do not produce changes in behavior, the study found. Its seven authors propose a radical possibility that would halt the implicit-bias crusade in its tracks: “perhaps automatically retrieved associations really are causally inert”—that is, they have no relationship to how we act in the real world. Instead of “acting as a ‘cognitive monster’ that inevitably leads to bias-consistent thought and behavior,” the researchers propose, “automatically retrieved associations could reflect the residual ‘scar’ of concepts that are frequently paired together within the social environment.” If this is true, they write, there would need to be a “reevaluation of some of the central assumptions that drive implicit bias research.” That is an understatement.

Among the study’s authors are Brian Nosek of the University of Virginia and Calvin Lai of Washington University in St. Louis. Both have collaborated with Greenwald and Banaji in furthering the dominant IAT narrative; Nosek was Banaji’s student and helped put the IAT on the web. It is a testament to their scientific integrity that they have gone where the data have led them. (Greenwald warned me in advance about their meta-analysis: “There has been a recent rash of popular press critique based on a privately circulated ‘research report’ that has not been accepted by any journal, and has been heavily criticized by editor and reviewers of the one journal to which I know it was submitted,” he wrote in an e-mail. But the Nosek, Lai, et al. study was not “privately circulated”; it is available on the web, as part of the open-science initiative that Nosek helped found.)

The fractious debate around the IAT has been carried out exclusively at the micro-level, with hundreds of articles burrowing deep into complicated statistical models to assess minute differences in experimental reaction times. Meanwhile, outside the purview of these debates, two salient features of the world go unnoticed by the participants: the pervasiveness of racial preferences and the behavior that lies behind socioeconomic disparities.

One would have difficulty finding an elite institution today that does not pressure its managers to hire and promote as many blacks and Hispanics as possible. Nearly 90 percent of Fortune 500 companies have some sort of diversity infrastructure, according to Howard Ross. The federal Equal Employment Opportunity Commission requires every business with 100 or more employees to report the racial composition of its workforce. Employers know that empty boxes for blacks and other “underrepresented minorities” can trigger governmental review. Some companies tie manager compensation to the achievement of “diversity,” as Roger Clegg documented before the U.S. Civil Rights Commission in 2006. “If people miss their diversity and inclusion goals, it hurts their bonuses,” the CEO of Abbott Laboratories said in a 2002 interview. Since then, the diversity pressure has only intensified. Google’s “objectives and key results” for managers include increased diversity. Walmart and other big corporations require law firms to put minority attorneys on the legal teams that represent them. “We are terminating a firm right now strictly because of their inability to grasp our diversity expectations,” Walmart’s general counsel announced in 2005. Any reporter seeking a surefire story idea can propose tallying up the minorities in a particular firm or profession; Silicon Valley has become the favorite subject of bean-counting “exposés,” though Hollywood and the entertainment industry are also targets of choice. Organizations will do everything possible to avoid such negative publicity.

In colleges, the mandate to hire more minority (and female) candidates hangs over almost all faculty recruiting. (Asians don’t count as a “minority” or a “person of color” for academic diversity purposes, since they are academically competitive.) Deans have canceled faculty-search results and ordered the hiring committee to go back to the drawing board if the finalists are not sufficiently “diverse.” (See “Multiculti U,” Spring 2013.) Every selective college today admits black and Hispanic students with much weaker academic qualifications than white and Asian students, as any high school senior knows. At the University of Michigan, for example, an Asian with the same GPA and SAT scores as the median black admit had zero chance in 2005 of admission; a white with those same scores had a 1 percent chance of admission. At Arizona State University, a white with the same academic credentials as the average black admit had a 2 percent chance of admission in 2006; that average black had a 96 percent chance of admission. The preferences continue into graduate and professional schools. UCLA and UC Berkeley law schools admit blacks at a 400 percent higher rate than can be explained on race-neutral grounds, though California law in theory bans them from using racial preferences. From 2013 to 2016, medical schools nationally admitted 57 percent of black applicants with low MCATs of 24 to 26 but only 8 percent of whites and 6 percent of Asians with those same low scores, as Frederick Lynch reported in the New York Times. The reason for these racial preferences is administrators’ burning desire to engineer a campus with a “critical mass” of black and Hispanic faces.

Similar pressures exist in the government and nonprofit sectors. In the New York Police Department, blacks and Hispanics are promoted ahead of whites for every position to which promotion is discretionary, as opposed to being determined by an objective exam. In the 1990s, blacks and Hispanics became detectives almost five years earlier than whites and took half as long as whites to be appointed to deputy inspector or deputy chief.

And yet, we are to believe that alleged millisecond associations between blacks and negative terms are a more powerful determinant of who gets admitted, hired, and promoted than these often explicit and heavy-handed preferences. If a competitively qualified black female PhD in computer engineering walks into Google, say, we are to believe that a recruiter will unconsciously find reasons not to hire her, so as to bring on an inferior white male. The scenario is preposterous on its face—in fact, such a candidate would be snapped up in an instant by every tech firm and academic department across the country. The same is true for competitively qualified black lawyers, accountants, and portfolio managers.

If such discrimination is so ubiquitous, there should be victims aplenty that the proponents of implicit bias can point to. They cannot.

I twice asked Anthony Greenwald via e-mail if he was aware of qualified candidates in faculty searches anywhere who were overlooked or rejected because of skin color. He ignored the question. I twice asked Jerry Kang’s special assistant for Equity, Diversity and Inclusion via e-mail if Vice Chancellor Kang was aware of faculty candidates for hire or promotion at UCLA or elsewhere who were overlooked because of implicit bias. Kang’s assistant ignored the question. Howard Ross has been a prominent corporate diversity trainer for 30 years, with clients that include hundreds of Fortune 500 companies, Harvard and Stanford medical schools, and two dozen other colleges and universities. I asked him in a phone interview if he was aware of the most qualified candidate for a business or academic position not getting hired or promoted because of bias. Ross merely said that there was a “ton of research that demonstrates that it happens all the time,” without providing examples.

PricewaterhouseCoopers has spearheaded an economy-wide diversity initiative, dubbed the CEO Action for Diversity & Inclusion™. Nearly 200 CEOs have signed a pledge to send their employees to implicit-bias training; in the case of PricewaterhouseCoopers, that means packing off 50,000 employees to the trainers. Any organization spending a large sum of money on a problem would presumably have a firm evidentiary basis that the problem exists. Megan DiSciullo is a spokesman for the CEO Action for Diversity & Inclusion and a member of PricewaterhouseCoopers’s human resources department. I asked her if she was aware of candidates who should have been hired at PwC but weren’t because of implicit bias. Our telephone exchange went as follows:

DiSciullo: I’m not aware of someone not getting a job because of bias.

Me: But are your managers making suboptimal decisions because of bias?

DiSciullo: The coalition as a group recognizes that everyone has unconscious bias; we are committed to training our managers to be better.

Me: Your managers are not making optimal decisions because of bias?

DiSciullo: Everyone has unconscious bias. I’m not saying that anyone is not being hired or promoted, but it’s part of the workplace.

Me: In what way? People are being treated differently?

DiSciullo: People have bias, but it manifests itself differently. I think you have an agenda which I am trying to unpack. The facts are clear that people have biases and that they could bring them to the workplace. Corporations recognize that fact and want to build the most inclusive workplace.

Me: You base the statement that everyone has biases on what?

DiSciullo: On science and on the Harvard Business Review.

Other signatories to the CEO Action for Diversity & Inclusion include Cisco, Qualcomm, KPMG, Accenture, HP, Procter & Gamble, and New York Life, several of which are on the steering committee. These companies either failed to respond to preliminary requests for an interview about the CEO Action for Diversity & Inclusion or went silent when asked if they knew of implicit bias infecting hiring and promotion decisions. Obviously, such reticence may be motivated by a fear of litigation. But it is also likely that there are no known victims of implicit bias.

The insistence that implicit bias routinely denies competitively qualified minority candidates jobs and promotions also requires overlooking the relentless pressure to take race into account in employment and admissions decisions. I asked Greenwald if implicit bias overrides these institutional pressures to hire and promote by race. He evaded the question. “ ‘Override’ is the wrong word,” he wrote back. “Implicit biases function as filters on perception and judgment, operating outside of awareness and often rendering perception and judgment invalid.” In response to a follow-up question, he denied that those institutional pressures were all that strong, as evidenced by the fact that many diversity programs produced no “beneficial effect.” Another explanation for the persistent lack of proportional representation in the workplace, however, is that there are not proportional numbers of qualified minorities in the hiring pipeline.

Diversity trainers invoke behavioral economics to explain why explicit diversity mandates don’t override implicit bias. This field, popularized by the work of cognitive psychologist Daniel Kahneman, has shown that people often fail to use information in rational ways. “We now know that most decisions are visceral and emotional,” said Ross, in response to my incredulity that a college physics department would not leap at a competitively qualified black PhD candidate. Joelle Emerson, a high-profile diversity trainer in Silicon Valley, claims that because companies are “not purely rational actors,” they will as a group discriminate against the most qualified candidate. “People will be left out of entire industries,” she said. “People from stereotyped groups have a harder time getting hired and promoted.”

But incentives can overcome the flaws in rational analysis identified by behavioral economics. The incentive for race-conscious employment decisions is so strong that the burden of proof is on those who maintain that implicit bias will override it. The fact is that blacks on the academic market and in many other fields enjoy a huge hiring advantage.

Yet they are still not proportionally represented in the workplace, despite decades of trying to engineer “diversity.” You can read through hundreds of implicit-bias studies and never come across the primary reason: the academic skills gap. Given the gap’s size, anything resembling proportional representation can be achieved only through massive hiring preferences.

From 1996 to 2015, the average difference between the mean black score on the math SAT and the mean white score was 0.92 standard deviation, reports a February 2017 Brookings Institution study. The average black score on the math SAT was 428 in 2015; the average white score was 534, and the average Asian score was 598. The racial gaps were particularly great at the tails of the distribution. Among top scorers—those scoring between 750 and 800—60 percent were Asian, 33 percent were white, and 2 percent were black. At the lowest end—scores between 300 and 350—6 percent were Asian, 21 percent were white, and 35 percent were black. If the SATs were redesigned to increase score variance—that is, to spread out the scores across a greater range by adding more hard questions and more easy questions—the racial gaps would widen.
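
A rough check on the arithmetic, under a simplifying normal-distribution assumption that the Brookings figures themselves do not require: the 106-point difference between the group means, divided by the reported 0.92-standard-deviation gap, implies a pooled standard deviation of roughly 115 points, and a mean gap of that size produces far larger disparities at the top of the score distribution than at the middle.

```python
from statistics import NormalDist

black_mean, white_mean = 428, 534
gap_in_sd = 0.92
pooled_sd = (white_mean - black_mean) / gap_in_sd
print(f"implied pooled SD: about {pooled_sd:.0f} points")

# Share of each group above 750 under a (simplifying) normal approximation:
for group, mean in [("black", black_mean), ("white", white_mean)]:
    share_750_plus = 1 - NormalDist(mean, pooled_sd).cdf(750)
    print(f"{group}: {share_750_plus:.2%} score 750 or above")
# The tails diverge far more than the means, which is why the top score bracket
# looks so different from the overall test-taking population.
```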

The usual poverty explanations for the SAT gap don’t hold up. In 1997, white students from households with incomes of $10,000 or less scored better than black students from households with incomes of $80,000 to $100,000. At the University of California, race predicts SAT scores better than class.

Proponents of racial preferences routinely claim that the SATs are culturally biased and do not measure actual cognitive skills. If that were the case, blacks would do better in college than their SAT scores would predict. In fact, blacks do worse. Further, the math test is not amenable to the “cultural-bias” criticism (unless one believes that math is itself biased). Low scores reflect an actual difficulty with math. Fifty-four percent of black elementary and high school students in California, for example, do not meet the state’s math standards, compared with 21 percent of white students and 11 percent of Asian students. The chancellor of the California Community Colleges system proposed in July 2017 that intermediate algebra be removed from graduation requirements for associate’s degrees because blacks and Hispanics have such a hard time passing the course. Math difficulties are the greatest reason that, in California, only 35 percent of black students earn their associate’s degrees, compared with 54 percent of whites and 65 percent of Asians.

The math SAT and algebra require abstract quantitative reasoning. The math achievement gap will most affect hiring in fields with advanced quantitative requirements. In 2016, 1 percent of all PhDs in computer science went to blacks, or 17 out of 1,659 PhDs, according to the Computing Research Association’s annual Taulbee Survey. Three blacks received a PhD in computer engineering, or 3.4 percent of the total. Blacks earned 0.7 percent of master’s degrees in computer science and 3 percent of undergraduate degrees in computer science. Yet the biggest Silicon Valley firms are wedded to the idea that their own implicit bias is responsible for the racial (and gender) composition of their workforce. A member of Google’s “People Analytics” (i.e., HR) department, Brian Welle, lectures widely about implicit bias and the IAT; Google declined to let me interview him or a People Analytics colleague. (In August 2017, Google’s CEO fired James Damore, a computer engineer, for questioning the assumptions behind the company’s implicit-bias training, especially regarding gender.)

A host of other professions beyond the sciences draw on the analytic skills required by algebra and the math SAT. Business management and consulting, for example, call for logic and conceptual flexibility. Anyone in medicine, including nursing, should be able to master basic algebra. These professions should not be tainted with the implicit-bias charge when they are hiring from the same finite pool of competitively qualified blacks.

The SAT’s verbal sections show the same 100-point test-score gap between whites and blacks as the math section. Pace the critics, that is not an artifact of cultural bias: the average black 12th-grader reads at the level of the average white eighth-grader. In California, 44 percent of black students through the high school grades do not meet state standards in English language arts and literacy, compared with 16 percent of white students and 11 percent of Asian students.

Like the SAT, the LSAT also measures reading comprehension and verbal reasoning. It has a greater test-score gap than the SAT: 1.06 standard deviations between average black and white scores in 2014. If the LSAT test-score gap were the result of cultural bias, the LSAT would under-predict black performance in law school. It does not. The majority of black law students cluster in the bottom tenth of their class, thanks to racial preferences in admissions. The median black law school GPA is at the 6th percentile of the white GPA distribution, meaning that 94 percent of whites do better than the median black. This achievement gap cannot be chalked up to implicit bias on the part of law school professors. The overwhelming majority of law school exams are still graded blind, meaning that the identity of the test-taker is concealed from the grader. The bar exam is also graded blind. If blacks were discriminated against in law school by professors, they should do better on the bar exam than their GPAs would predict. They do not. A study by the Law School Admission Council found that 22 percent of black test-takers never pass the bar examination after five attempts, compared with 3 percent of white test-takers. Yet the relatively low number of blacks among law-firm partners is routinely attributed—by the firms themselves—to hiring and promotion committee bias. In fact, corporate law firms hire blacks at rates that exceed their representation among law school graduates. But because the preferences in their favor are so large—the law school GPAs of black associates are at least a standard deviation below those of white associates—black attrition from corporate firms is high. By the time the partnership decision rolls around, few black associates remain at their firms to be promoted, as UCLA law professor Richard Sander has shown.
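
For perspective on what a gap of that size means in percentile terms (under a normal approximation, and keeping in mind that the 6th-percentile GPA figure comes directly from law-school grade data rather than being derived from the LSAT gap): a median that sits 1.06 standard deviations below another group’s mean lands near that group’s 14th percentile.

```python
from statistics import NormalDist

lsat_gap_in_sd = 1.06
pct = round(100 * NormalDist().cdf(-lsat_gap_in_sd))  # standard normal approximation
print(f"median of the lower-scoring group: roughly the {pct}th percentile of the higher-scoring group")
# The law-school GPA gap cited in the text (median black GPA at the 6th percentile
# of the white distribution) is wider still, consistent with admissions preferences
# placing black students in classes where their entering credentials sit well below
# the median.
```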

Implicit-bias researchers do not discuss the cognitive skills gap. I asked Greenwald if gaps in academic preparedness should also be considered in explaining socioeconomic disparities. He responded simply by offering up more wellsprings of bias: “There are sources of unintended disparities other than implicit bias (esp. institutional discrimination and in-group favoritism).” But a 2014 study for the Federal Reserve Bank of Chicago by economist Bhashkar Mazumder found that differences in cognitive skills measured by the Armed Forces Qualification Test account for most of the black–white difference in intergenerational mobility. Blacks and whites with the same score on the AFQT have similar rates of upward and downward mobility. The AFQT should over-predict upward mobility for blacks if bias were holding them back; it does not.

The iron grip of the implicit-bias concept on the corporate world will merely result in a loss of efficiency as workers are again trundled off to this latest iteration of diversity training and are further pressured to take race into account in personnel decisions. Most ominously for productivity, signatories to the CEO Action for Diversity & Inclusion have pledged to encourage more conversations among their employees about race, even though a recent report found that 70 percent of employees are not comfortable discussing race relations at work—understandably, given the potential tensions created by diversity preferences and the oversaturation of race talk in American life. Procter & Gamble is on the steering committee of the CEO Action for Diversity & Inclusion. You would think that its managers would have better things to do than lead bull sessions about racial microaggressions (alleged racial slights too small for ordinary detection), in light of the company’s lackluster growth over the last decade and the ongoing fight for control of its board.

But it is in law enforcement that the mania for implicit-bias training exacts its most serious cost. Police officers unquestionably need more hands-on tactical training to avoid ending up in a position that requires the use of force. Officers need tools for keeping their cool in highly charged, hostile encounters. They should practice de-escalating confrontations and gaining voluntary compliance. Some officers pay out of their own pocket for tactical training, since their departments offer too little of it. But now there will be less time and departmental money available for the necessary skills upgrades because precious training resources are being diverted to the implicit-bias industry. And that wasteful training is being carried out in the name of a problem that does not even exist: bias-driven police killings of black men.

Joshua Correll, a psychologist at the University of Colorado, has been studying police shoot/ don’t shoot decisions for years. His experiments require officers to react to rapidly changing images of potential targets on a computer screen. He has found that officers are no more likely to shoot an unarmed black target than an unarmed white one. Officers are slightly quicker to identify an armed black target as armed than an armed white target, and slower to identify an unarmed black target as unarmed than an unarmed white target. But the faster cognitive processing speeds for stereotype-congruent targets (i.e., armed blacks and unarmed whites) do not result in officers shooting unarmed black targets at a higher rate than unarmed white ones.

Correll’s conclusions were confirmed in 2016 with the release of four studies that found either no antiblack bias in police shootings or a bias that favored blacks. Three of the studies—by Roland Fryer, Ted Miller, and the Center for Policing Equity—reviewed data on actual police use of force; a fourth put officers in a more sophisticated life-size video simulator than the computers that Correll uses. That study, led by the University of Washington’s Lois James, found that officers waited significantly longer before shooting an armed black target than an armed white target and were three times less likely to shoot an unarmed black target than an unarmed white target. James hypothesized that officers were second-guessing themselves when confronting black suspects because of the current climate around race and policing.

Both experimental and data-based research, in other words, dispel the claim that police officers are killing blacks out of implicit bias. That has not stopped the implicit-bias juggernaut, however. Police departments across the country are subjecting their officers to implicit-bias training at considerable cost; any controversial shooting invariably triggers a pledge to bring in the bias consultants. The New York Police Department next year will start requiring recruits and officers already on the job to attend a full-day seminar in implicit bias, time that could be better spent practicing tactical and communication skills.

Crime, Unfiltered

Harvard’s Project Implicit website, which publicly administers the IAT, offers an optional questionnaire before the race test, designed to measure explicit racial attitudes. The questionnaire instead demonstrates the worldview of bias researchers. Agreeing with such statements as: “Most big corporations are interested in treating their black and white employees equally,” “Black people should take the jobs that are available and then work their way up to better jobs,” or “Many black teenagers do not respect themselves or anyone else” will undoubtedly earn you an F in tolerance and understanding. (The project managers have not yet revealed survey results.) The statement about black teenagers, at the very least, is fully supported by empirical crime data and individual instances of youth crime, culled from a random handful of cities over the last half-year.

In late August 2017, for example, a group of four black teens and young adults went on an armed-robbery rampage in Chicago and Indiana, targeting the elderly in particular. On August 24, two of the robbers, in hoodies, jumped out of an SUV and demanded that a 73-year-old man, strolling in his Southwest Side Chicago neighborhood, turn over his wallet, phone, and keys. He refused and was shot in the abdomen, reported the Chicago Tribune. Days before, within a few minutes’ time, the same group had tried to rob an 85-year-old man and a 67-year-old man in Indiana. Shortly thereafter, they robbed a 33-year-old woman walking with her 11-year-old daughter. The group is suspected of up to 20 armed robberies.

In Baltimore in June 2017, a 37-year-old mother of eight called the police after someone threatened her son during a dispute over a stolen bike seat. After the cops left, 18-year-old Darius Neal returned and shot her dead in front of her children. Also that month, four 16-year-olds beat a city commissioner in downtown Baltimore and stole his two phones and wallet. The commissioner runs antiviolence programs for the city.

Baltimore prosecutors charged a 17-year-old with a triple shooting over the summer, part of a wave of gun violence that left 11 local juveniles dead and 25 wounded through the first eight months of 2017.

In early June 2017, NYPD officers responded to a report of gunfire at a party in East Flatbush, Brooklyn. Officer Dalsh Veve was questioning a 15-year-old in a stolen Honda at 11:50 PM when the 15-year-old floored the accelerator and dragged Veve two blocks. Veve was left in a coma with brain trauma. The boy had prior convictions for possession of stolen property, menacing, and burglary. His teen passengers were charged with hindering prosecution.

At least three times from April to June, groups of turnstile-jumping teens—in one case, numbering up to 60—beat and robbed passengers on the Bay Area Rapid Transit system. A similar group attacked a carnival near the Oakland Coliseum, pummeling workers and stealing prizes from the game booths. BART managers refused to release video from the incidents because doing so would allegedly perpetuate false racial stereotypes.

In March, more than 100 teens marauded through downtown Philadelphia, beating and tasing people, jumping on the hoods of cars, and running through traffic. A 19-year-old shot two Miami officers ambush-style as they were sitting in an unmarked van in a housing project. One of the officers had previously arrested the gunman on a weapons charge. And two Chicago boys, 14 and 15, participated in a gang rape of a 15-year-old girl that was filmed and broadcast live on Facebook. The victim’s family has subsequently been harassed by local children; a group of girls beat up the victim’s 12-year-old sister in retaliation for the victim’s having reported the assault. The attackers belong to a group of 35 to 50 teenage boys who have been terrorizing the elderly and children in the neighborhood, reports the Chicago Tribune.

In late February, a mob of teens in Baltimore surrounded a woman and yanked her by her ponytail to the ground, hit a homeless woman in the face, and punched a couple. A few days earlier, a teen mob attacked a man and stole his phone. The previous week, about 15 youths knocked a woman down at the University of Maryland and stole her phone.

All the IAT-inspired lecturing cannot change the reality that drives police activity: the incidence of crime. And that is a topic about which implicit-bias trainers have little to say, as I discovered while observing a three-day training program in Chesterfield, Missouri, in May 2016.

About three dozen officers and supervisors had come to this green suburb of St. Louis from as far away as Montana, Virginia, North Carolina, Michigan, and Kentucky for a “train-the-trainer” session offered by the premier antibias outfit in the field. Lorie Fridell has been lecturing to police departments about bias-based policing since the “driving while black” notion emerged in the 1990s. But the implicit-bias idea has boosted her business enormously, as has the Black Lives Matter movement, jump-started by the Michael Brown shooting in nearby Ferguson in 2014. In 2016, Fridell was fielding a call a day from police departments, courts, and other parts of the criminal-justice system. The Obama Justice Department funded her organization’s implicit-bias trainings for police departments that it considered particularly troubled. Other agencies pay their own way.

A day and a half into the three-day Chesterfield training, the attendees had been informed that the Brown shooting was a function of implicit bias (even though Brown had tried to grab the officer’s gun and had assaulted him) and that the overrepresentation of blacks in prison was because blacks get longer sentences than whites for the same crime (in fact, sentences are equal, once criminal history is taken into account). The attendees had learned about the IAT; they had watched a video of singer Susan Boyle’s victory in the television show Britain’s Got Talent; they had viewed photos of a hot babe on a motorcycle and a female executive with a briefcase; they had written down stereotypes about the “unhoused”—not activities directly related, say, to serving a felony warrant safely. The theme of these exercises was that everyone carries around stereotypes, and that to be human is to be biased. In the case of police officers, the two trainers explained, those biases could put an officer’s life in jeopardy if he discounts a potential threat from a white female or a senior citizen because it is counter-stereotypical. But those implicit biases are also killing black men, said trainer Sandra Brown, a retired Palo Alto police public-affairs lieutenant.

Brown described a study by Stanford psychologist Jennifer Eberhardt in which Stanford students in a psych lab were shown a blurry object on a computer screen. The students were quicker to identify it correctly as a gun if they had been shown an image of a black face right beforehand. (Greenwald and Banaji also invoke this study.) “Black men are dying because we see the gun too quickly,” Brown said—never mind that the aforementioned research on police shootings shows that black men are not dying because police officers “see the gun too quickly.” Why might such a priming function occur? Eberhardt and her coauthors, of course, attributed it to irrational stereotype. But another explanation comes to mind: blacks are objectively more associated with crime. The Chesterfield training only tiptoed up to this topic.

It is “partially factual,” Brown said, that “people of color” are disproportionately involved in street crime. Actually, it is fully factual; street crime today is almost exclusively the province of “people of color.” In New York City, for example, blacks and Hispanics committed 98 percent of all shootings in 2016; whites, who, at 34 percent of the population, are the city’s largest racial group, committed less than 2 percent of all shootings. Those figures come from the victims of, and witnesses to, those shootings. Blacks, who are 23 percent of the population, committed 71 percent of New York’s gun violence—meaning that blacks in New York are 50 times more likely to commit a shooting than whites. In Chicago, blacks and whites each make up a little less than a third of the city’s population: blacks commit 80 percent of all shootings; whites, a little over 1 percent—making blacks in the Windy City 80 times more likely to commit a shooting than whites. These disparities are repeated in cities across the country. If you’re hit in a drive-by shooting, the odds are overwhelming that your assailant will be black or Hispanic—and that you will be, too, since blacks and Hispanics are usually the victims of such crimes. If the public associates blacks with violent street crime, it is facts that lead to that association.
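
The “times more likely” figures are simple rate ratios: each group’s share of shootings divided by its share of the population, with one rate then divided by the other. A sketch of that arithmetic, using the shares quoted above (the Chicago population shares are approximated at just under a third each, as the text states):

```python
def rate_ratio(incidents_a, population_a, incidents_b, population_b):
    """Per-capita incident rate of group A relative to group B (shares as fractions)."""
    return (incidents_a / population_a) / (incidents_b / population_b)

# New York City, 2016, using the shares quoted in the text:
print(round(rate_ratio(0.71, 0.23, 0.02, 0.34)))  # roughly 50
# Chicago, with both groups a bit under a third of the population:
print(round(rate_ratio(0.80, 0.32, 0.01, 0.32)))  # roughly 80
```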

Yes, a police action should not be based on a “stereotype,” as Brown rightly admonished. But crime is the overwhelming determinant of policing today, and to pretend that implicit bias drives policing distracts from the challenges that officers face. By day two, the audience was interjecting some social and political reality back into the training. “Are there any studies about black and white officer shootings?” asked a black officer. “No one’s outraged if I shoot a black, but if a white officer does, it will be pandemonium.” Another local officer said that he worried about the violence in the black community: “It’s so disproportionate. When black people are shot by other blacks, it doesn’t make the news. There were over a dozen people shot in a theater the other day. I worry about that disparity.”

Then an officer from Chesterfield raised the most pressing concern in the Black Lives Matter era: depolicing. Seventy-five percent of the apprehended shoplifters in the Chesterfield mall were black, he said. (Chesterfield’s black population was 2.6 percent in 2010.) “We struggle with depolicing; it’s difficult to tell officers to enforce the shoplifting laws when they will be confronted with the implicit bias issue.” That is the dilemma facing officers today: if they enforce the law, they will generate the racially disproportionate stop-and-arrest statistics that fuel specious implicit-bias charges. But it is the reality of crime, not bias, that results in those disproportions.

The trainers had nothing to offer to resolve this problem. “It’s hard to answer these tough questions,” Brown said. Her partner, Scott Wong, also from the Palo Alto police department, gamely tried to bring the discussion back to the official topic. “You need a passion for this; you have to believe in implicit bias and how it affects officers.” But while many officers could do with a courtesy tune-up, they are overwhelmingly not making bad decisions based on invidious stereotypes. What they are doing, on a daily basis, is trying to deal with the breakdown of family and bourgeois norms in inner-city areas that leads to so many young black men gang-banging in the streets. Joshua Correll has found that officers’ neurological threat response is more pronounced when confronting black suspects. Might that be because black males have made up 42 percent of all cop-killers over the last decade, though they are only 6 percent of the population? Or because the individuals involved in the daily drive-by shootings in American cities are overwhelmingly black? Until those realities of crime change, any allegedly “stereotypical” associations between blacks and crime in the public mind will remain justified and psychologically unavoidable. Those crime rates will also affect the pool of job candidates without a criminal record, further reducing the likelihood of proportional representation in the workplace.

The Chesterfield training did offer several profound pieces of advice: “Make every day the day you try to change someone’s perceptions” of the police, Brown said. She urged officers to get out of their cars and talk to civilians: “They need to know us; people are afraid to talk to us as human beings.” However sage this message, though, it should not be necessary to contract with a pricey implicit-bias trainer to convey it.

The implicit-bias crusade is agenda-driven social science. Banaji seems to see herself on a crusade. In an e-mail to New York’s Jesse Singal, she attacked both the credentials and the motives of the academics who have subjected the IAT narrative to critical scrutiny: “I don’t read commentaries from non-experts,” she wrote (those “non-experts” are overwhelmingly credentialed psychologists, like herself). “It scares people (fortunately, a negligible minority) that learning about our minds may lead people to change their behavior so that their behavior may be more in line with their ideals and aspirations.” The critics should explore with their “psychotherapists or church leaders” their alleged obsession with the race IAT, she suggested. Kang has accused critics of holding a “tournament of merit” vision of society and of having financial reasons for IAT skepticism. (Of course, the fact that Banaji and Kang hire themselves out as implicit-bias trainers, for “non-trivial . . . fees,” as Kang puts it about himself, and that Greenwald serves as a paid expert witness in discrimination lawsuits, does not lead Kang to impute financial reasons for such pro-IAT advocacy.)

A thought experiment is in order: if American blacks acted en masse like Asian-Americans for ten years in all things relevant to economic success—if they had similar rates of school attendance, paying attention in class, doing homework and studying for exams, staying away from crime, persisting in a job, and avoiding out-of-wedlock childbearing—and we still saw racial differences in income, professional status, and incarceration rates, then it would be well justified to seek an explanation in unconscious prejudice. But as long as the behavioral disparities remain so great, the minute distinctions of the IAT are a sideshow. America has an appalling history of racism and brutal subjugation, and we should always be vigilant against any recurrence of that history. But the most influential sectors of our economy today practice preferences in favor of blacks. The main obstacles to racial equality at present lie not in implicit bias but in culture and behavior.

Top Photo: Officer Edward Gillespie teaches a class about implicit bias at the Baltimore Police Training Academy. (RICKY CARIOTI/THE WASHINGTON POST/GETTY IMAGES)
