American companies’ embrace of radical ideas appears both sudden and inexplicable. In internal trainings, companies from Disney to Lockheed Martin ask their employees to “challenge colorblind ideologies” and “deconstruct their white male privilege.” Firms spend vast sums of money on such trainings, on diversity-related speakers, and on maintaining a progressive image. Employees find themselves wondering why their workplace has transformed into a progressive propaganda center.

Much has been made of the propagation of certain ideas—call them social-justice ideology, critical race theory, or wokeness—in American institutions. But surprisingly little attention has been paid to the question of why institutions “go woke.” Some address this question in philosophical or ideological terms, noting the continuity between these ideas and the work of certain twentieth-century intellectuals, from the Frankfurt School in the 1920s to the critical race theorists of the 1980s. But a historical and sociological analysis can help explain why institutions accepted these ideas as legitimate in the first place.

Race-conscious policy has become a self-sustaining force in the modern workplace. Where companies once adhered to a paradigm of “compliance,” in which they adopted affirmative action to satisfy a new legal regime, they now follow the rule of “diversity,” which encourages race-consciousness as an end in itself. If social-justice ideology is the food that universities and businesses give to their students and employees, diversity is now the air that they breathe, the set of taken-for-granted assumptions without which woke ideas would be ignored. The story of diversity’s institutionalization in American corporate culture tells us much about how we got to this current moment—and how we can move beyond it.

The story starts in the civil rights era—not with marches, sit-ins, and the broader social movement, but with the sprawling bureaucracy that this movement produced. Lyndon B. Johnson’s landmark Civil Rights Act of 1964 dramatically expanded the responsibilities of the executive and judicial branches, compelling regulators to intervene in education, housing, and welfare. It also created new regulatory entities, such as the Equal Employment Opportunity Commission (EEOC), to carry out its mandates. “Civil rights ideology,” writes journalist Christopher Caldwell, “especially when it hardened into a body of legislation, became, most unexpectedly, the model for an entirely new system of constantly churning political reform.”

The new regime, however well intentioned, came with a complexity that private enterprise struggled to understand. As sociologists Frank Dobbin and John Sutton have argued, the machinery of civil rights suffered from many of the problems that plague the rest of the U.S. regulatory state. It was ambiguous, in that it broadly prohibited discrimination without clearly articulating what that meant; it was continuously expanding, so corporations constantly had to update their awareness of relevant rules; and it was fragmentary, in that it was carried out by redundant and often conflicting agencies at different levels of government. Such terms as “affirmative action” and “discrimination” were rarely defined, their meanings always in flux.

Still, enforcement took off in the 1970s as both political parties embraced the new apparatus. The doctrine of disparate impact, enshrined by the Supreme Court’s unanimous 1971 decision in Griggs v. Duke Power Co., lowered the burden of proof for bias. Richard Nixon set up “goals and timetables” for corporate affirmative-action commitments and established hiring quotas for federal contractors, Gerald Ford promulgated regulations mandating bilingual education, and Jimmy Carter consolidated enforcement power under the Office of Federal Contract Compliance Programs (OFCCP), while welcoming “hundreds of complaints” against employers for various violations. Under all three presidents, the EEOC aggressively targeted some of America’s biggest employers, including AT&T, General Electric, and Ford.

Facing the dual challenges of inscrutable regulation and aggressive regulators, businesses responded by complying with the new mandates for race-consciousness under the Civil Rights Act’s Title VII. They implemented race-conscious policies to avoid the ire of regulators and the risk of lawsuits or federal investigations. Diversity consultants Rohini Anand and Mary-Frances Winters note that corporate trainings were primarily legalistic affairs, “a litany of dos and don’ts and maybe a couple of case studies for the participants to ponder.” As civil rights regulations grew, so did corporations’ tools for complying. In 1970, Dobbin observes, fewer than 20 percent of firms had written equal-employment or affirmative-action rules; by 1980, after a decade of heavy-handed enforcement, nearly half had done so.

Yet large majorities of the country opposed affirmative action, which became a chief target of Ronald Reagan. On the campaign trail in 1980, he identified affirmative action as an example of government overreach and micromanagement, promising to roll back “bureaucratic regulations which rely on quotas, ratios, and numerical requirements.” Once Reagan took office, an entrenched bureaucracy, a hostile Congress, and the courts frequently thwarted his ambitious agenda. He did, however, ease compliance pressures for a time. William Bradford Reynolds, Reagan’s appointee to run the Justice Department’s Civil Rights Division, publicly opposed the use of quotas and endorsed colorblindness in hiring. Then–EEOC chairman Clarence Thomas shifted the focus from big systemic cases to individual instances of discrimination and tried to minimize the use of “goals and timetables.” The OFCCP downshifted, and both agencies saw their staffing and funding levels slashed.

If corporations were still complying with affirmative action merely because of legal pressures, this regulatory relaxation should have induced them to reverse course. But, astonishingly, they strengthened their commitment to affirmative action, even filing amicus briefs and sending telegrams to Reagan in support of it. A 1985 survey of Fortune 500 companies that Dobbin cites found that 95 percent would “continue to use numerical objectives to track the progress of women and minorities,” even if such objectives were no longer imposed. A 1986 survey found that despite reduced enforcement, 90 percent of firms planned to keep their affirmative-action programs unchanged, while the other 10 percent planned to expand them. Once a product of compliance with government regulation, race-conscious corporate policy had taken on a life of its own.

Corporations were not going to give up race-conscious policy just because Reagan told them to, but they needed a rationale for continuing to pursue it. The Reagan administration unwittingly handed them one with Workforce 2000, a 1987 report commissioned by the Department of Labor and authored by two fellows at the Hudson Institute that unexpectedly became a bestseller. The report’s blockbuster finding: by the end of the millennium, only 15 percent of those entering the workforce would be white men, while the large remainder would be women and minorities. This pending demographic tidal wave gave business leaders a new reason to care about race- and sex-conscious policy—namely, the need to create a workplace that could cater to a wide variety of workers.

Thus, the diversity paradigm was born. R. Roosevelt Thomas, the godfather of workplace diversity management, argued that affirmative action had been designed to help women and minorities enter a monolithically white and male labor market. But by 1990, women, immigrants, and minorities were part of the workforce and, Workforce 2000 argued, would soon be the overwhelming majority of new entrants. “Women and minorities no longer need a boarding pass,” wrote Thomas. “They need an upgrade.” Building a corporate culture that helped them achieve their full potential meant “managing diversity”—not only ensuring that the new, heterogeneous workforce was as productive as the old homogeneous one, but extracting the added value that this heterogeneity could bring. As one academic puts it, diversity management is “aimed at maximizing every individual’s potential to contribute towards the realization of the organization’s goals through capitalizing on individual talents and differences within a diverse workforce environment.”

The shift from compliance to diversity constituted a regime change. Compliance was naturally limited: businesses followed the rules to avoid legal and cultural strife. But diversity was potentially unlimited: one could always do more to make the corporate culture more welcoming to members of various identity groups. Unlike compliance, diversity had a business case: it was said to be the key to survival in the twenty-first-century economy. And diversity came to dominate discussions of race-consciousness in the workplace. One analysis of professional management journals found that discussion of diversity-related concepts was more or less nonexistent prior to 1987, but after Workforce 2000 appeared, it took off—a trend that has persisted.

While the business case for diversity still commands the support of corporate leaders, it is empirically lacking. Workforce 2000’s most sweeping conclusion—that by the turn of the century, white men would make up just 15 percent of new entrants to the workforce—was wrong. In reality, those entering the workforce by 2000 would look much like those in 1987. Even today, the workforce remains majority white, particularly at the top end of the skill distribution, where the economy is growing fastest. The demographic tidal wave never came.

Meantime, research generally doesn’t support the notion that diversity is good for the bottom line. Meta-analyses of the literature on measures of cultural or demographic diversity among employees routinely yield mixed findings, reporting no effects or even small negative effects of such measures as racial and sexual diversity on team performance. Cultural diversity—that is, the immigrant share of the workplace—can sometimes have small positive effects on innovation, but those effects are both modest and contingent on other factors. Evidence from studies that randomly assign students to new businesses is similarly equivocal. One study based on Dutch students found a U-shaped relationship between team diversity and business performance, suggesting that there can be both too little and too much diversity. Another, based on MBA students at Harvard, found that greater gender and race diversity resulted in worse performance by randomly assigned teams.

Perhaps diversity improves business performance in some situations, but it clearly doesn’t do so all the time. That’s unsurprising: diversity introduces new perspectives, which can spur innovation, but it can also make teamwork more challenging. Diversity is far from the “killer app” that its proponents make it out to be.

If the findings on diversity and productivity are ambiguous, the evidence for whether diversity trainings or similar programs succeed is unambiguously negative. Research on online diversity trainings finds limited durable behavioral change. A 2016 report from the EEOC found that harassment training had little effect on workplace behavior. An analysis of some 700 companies found that diversity trainings and diversity evaluations for management can actually result in fewer minorities represented among management. And at least some evidence exists that adopting diversity goals and teaching about stereotypes can actually serve to reinforce stereotypes.

In short, race-conscious policy is not fundamental to business success. Yet every year, businesses spend billions on diversity initiatives, which they believe are both urgently necessary and good for their bottom line. Why did they adopt a belief based on faulty evidence—and why do they persist in that belief?

Compliance had a rational basis: engage in (or appear to engage in) affirmative action to avoid legal trouble. Diversity’s rational basis—to compete in an ever more diverse economy—appears to be a mirage. Instead, diversity seems to have survived because corporations have been unwilling to drop an entrenched practice even after its original justification receded. Diversity was a norm that had become, by the 1990s, “institutionalized.”

In a classic 1977 paper, sociologists John Meyer and Brian Rowan argue that an organization’s behavior is structured not only by the pursuit of efficiency or productivity but also by the social context in which that organization exists. That social context—including everything from public opinion and the law to the “views of important constituents” and “knowledge legitimated through the education system”—often results in the adoption of practices that determine an organization’s structure. These practices are adopted not necessarily because they are optimal for profit-making but because they are simply what is done. As Meyer and Rowan remind us, corporations don’t always do things to enhance the bottom line; they also create and participate in “ceremonies” and “myths.” And failure to do so can endanger their social position.

History suggests that by the time Ronald Reagan came to power and began to ratchet back affirmative-action enforcement, race-conscious policy had become institutionalized in the sense that followers of Meyer and Rowan (often called the “neo-institutional” school of sociology) mean. Diversity emerged not because it served the bottom line but because race-consciousness was simply what was done: it was an end in itself, and the business case was always a post-hoc justification.

As conditions have changed since the 1990s, workplace diversity has evolved. Where diversity management was once the watchword, human-resources officers now talk about “diversity, equity, and inclusion,” while still insisting that race-consciousness is an essential principle of business success. Most firms have followed the trend to appease their employees. As liberal views and employability (as measured by education) have become closely correlated, firms compete for skilled workers by adopting woke corporate cultures, making wokeness a form of nonpecuniary benefit. In the process, the demands of diversity become increasingly excessive. Corporate executives routinely spend their weekends “dismantling whiteness” in elaborate performances.

The transition from compliance to diversity converted race-consciousness from a legal mandate to a feature of doing business and thereby legitimated the pursuit of race-focused political agendas in the workplace. When employees demand, or employers proactively grant, that the company should tweet about Black Lives Matter or hold segregated training events, it is partly because they have adopted the superficial logic of diversity, agreeing that race-conscious policy is fundamental to profitability.

The institutionalization of diversity also explains why worker and activist demands have grown more extreme, from the awkward multiculturalism of the 1990s office to the outright struggle sessions of today. Compliance had limits: though firms competed to prove their level of compliance, a fundamental relationship existed between what regulators said and what companies did. The transition from compliance to diversity marks the moment at which race-conscious corporate policy became unmoored from rational purpose and mutated into a myth. Unlike compliance, diversity is unlimited—unbounded by a rational goal and pursued for its own sake. Organizations have no capacity to reject new diversity initiatives; more diversity is always better. In fact, because diversity is institutionalized, questioning its efficacy at achieving its stated goals gets you nowhere (except fired). Radical ideas leak from the academy into the corporate world as diversity easily absorbs ever more elaborate definitions of race-conscious policy.

With large portions of the country opposing affirmative action, President Ronald Reagan and Equal Employment Opportunity Commission chairman Clarence Thomas shifted the federal government’s focus from systemic cases to individual instances of discrimination. (BETTMANN/GETTY IMAGES)

In a recent essay, political scientist Richard Hanania argued that wokeness is a direct product of the Civil Rights Act of 1964 and its outgrowths. An anti-woke agenda, Hanania reasons, must therefore strike at the root of the post–civil rights regime, tearing up the laws and rules that perpetuate it.

That argument is incomplete. Yes, contemporary racial progressivism is linked genealogically to the post–civil rights dispensation. But even if the post–civil rights regime of aggressive federal oversight and disparate-impact enforcement were undone tomorrow, wokeness would persist—just as it did when Reagan modestly drew back the regulator’s reach, and just as it did under the Trump administration.

Legal solutions do exist to check the excesses of institutionalized diversity. As Hanania notes, corporate decision-makers can’t ignore the cudgel of state power: compliance concerns may not be dominant any longer, but organizations must still follow the law. Because American civil rights law remains ambiguous, broad, and unevenly enforced, it is exploited by a class of personnel experts with a vested interest in perpetuating extreme forms of racial progressivism. Overbroad lawmaking begets arbitrary and capricious administration, inducing firms to go further and further in fear of the long arm of the law. Clarifying what compliance with civil rights laws really means, then, could help firms that want to follow the rules without succumbing to insanity. Such regulatory reforms are almost certainly more desirable—and more electorally palatable—than wholesale assaults on the civil rights state, including its many benefits.

Undoing the corporate diversity regime, which doesn’t generate profits and permits harmful radicalism to seep into the workplace, should be the ultimate goal. One plausible approach is to heighten the contradictions between colorblindness, which still commands the support of most Americans and nominally remains the law of the land, and the color-consciousness that has come to dominate workplaces. Racial discrimination remains illegal under Title VII; while the battle to enforce its actual language will be difficult, successful legal action would endanger the diversity paradigm. Making clear to firms that, if they are too woke, they will face lawsuits from disgruntled employees may prove sufficient incentive to curb the worst excesses. An overly woke workplace can count as a “hostile environment,” too, particularly in the hands of a skilled litigator and a judiciary that has grown more conservative over the past half-decade.

Regulators could facilitate this process. An elite consensus in favor of affirmative action stymied Reagan’s reform efforts. Contemporary wokeness, by contrast, seems more polarizing, especially among elected Republicans, who see it as an inviting political target. Hanania rightly emphasizes that the Trump administration could have been far more aggressive on this front, and future administrations should consider the Reagan agenda—both its strengths in theory and the barriers to its implementation in practice.

All this is simply a sketch of what could be done. To move forward, however, it is important to understand how we got here.



City Journal is a publication of the Manhattan Institute for Policy Research (MI), a leading free-market think tank.