Nothing rattles the American Left so completely as the specter of a conservative, Bush-appointed Supreme Court. And no wonder. Over the last half-century, sympathetic judges have given the Left “progressive” policy outcomes that the voting booth wouldn’t deliver. It is this liberal judicial legacy—everything from affirmative action to partial-birth abortion—that the Left fears a Bush-influenced bench will sweep away. Haunted by the doomsday scenario of a Supreme Court dominated by Antonin Scalia and Clarence Thomas, the Democrats and their allies will fight with every means they can muster to block the appointment of conservative justices. If their ferocious and successful campaign against Bush’s recent circuit-court nominee, Charles Pickering, is any augury—and it is—those means (which in Pickering’s case included scandalously false accusations of racism) will be nasty.

What the liberals fear is a conservative judicial philosophy called “originalism,” which holds that judges must base their rulings on the Constitution’s text and structure, as the Framers understood it, and they must interpret statutes to mean what they say. Very different from the activist and creative jurisprudence that has prevailed for the last half-century, this approach, which was the Framers’ accepted view of judging, would never have permitted the Court’s expansive policymaking role that produced some of the Left’s most cherished victories. An originalist Court could even overturn some of those victories as unanchored in the Constitution.

Regardless of your view of the specific policies at issue, it is vital to America’s future that Bush win this battle for the courts: the Supreme Court’s politicized role in recent decades is corroding the self-government at the heart of American constitutionalism. In a democracy, voters, not unelected judges, decide the momentous questions. When the Supreme Court forces its policy preferences on the American people without the clear warrant of a constitutional text, as has happened often in the last 50 years, it is acting, in Justice Scalia’s view, more as an “anti-democratic Caesar” than as the impartial referee it’s supposed to be. Moreover, by politicizing constitutional law, the Court has weakened the rule of law that is the bedrock of our constitutional form of government. As Justice Thomas notes, if law is just politics, “then there are no courts at all, only legislatures, and no Constitution or law at all, only opinion polls.” Why, then, would we need unelected judges to perform the same function as an elected Congress?

It’s worth understanding how our courts got into this mess, so we can see how imperative it is to get them out. The government by judiciary we now have is not what our Founding Fathers had in mind. The original originalists, they imagined that a life-tenured, independent judiciary would merely interpret the law as the people’s elected representatives made it—including the supreme law embodied in the Constitution. But they would have no right to create law. As Alexander Hamilton explained in The Federalist: “The courts must declare the sense of the law, and if they should be disposed to exercise WILL instead of JUDGMENT, the consequence would equally be the substitution of their pleasure to that of the legislative body.” For the Framers, that would be tyranny and should end in impeachment.

The Supreme Court took on the awesome powers it wields today with three big cases, at intervals of half a century. Very briefly, the first was the 1857 Dred Scott decision, concerning a Missouri slave, Scott, whose owner had taken him into parts of the Louisiana Territory where the federal government had banned slavery. Back home, Scott sued, saying that his stay in a free territory made him a free man, on the “once free, always free” principle that most Southern courts acknowledged. The Supreme Court, headed by Roger Taney and dominated by Southerners, ruled seven to two that Scott couldn’t be free, in part because the Constitution did not give Congress the right to bar slavery in the federal territories (or anywhere else). In other words, the painstakingly negotiated Missouri Compromise of 1820, in which Congress had admitted Missouri as a slave state but made slavery taboo in other parts of the Louisiana Territory—a political deal that preserved the Union for nearly four decades—was unconstitutional. The ruling helped ignite the Civil War.

What makes Dred Scott the prototype of today’s judicial activism is its radical rewriting of the Fifth Amendment’s due process clause, which states that no person shall be “deprived of life, liberty, or property, without due process of law”—meaning, according to ancient legal tradition, simply that the authorities had to follow the legally proper procedures in applying the law. In Dred Scott, the Court declared that any federal law that deprived a citizen of his slaves would in itself violate due process. This notion of “substantive” due process—that government can’t deprive citizens of certain property or certain liberties without violating due process by the very act of doing so—“has enabled judges to do more freewheeling lawmaking than any other,” says Scalia.

You’d think that after its war-fomenting foray into politics, the Court would have left legislating to legislators. But no: from the late 1890s until the mid-1930s, it again marshaled the substantive due process concept to make, rather than interpret, law. This time, the Court injected into the due process clause (not just of the Fifth Amendment but also of the post–Civil War Fourteenth Amendment, modeled on it and applicable to the states) a natural right to “freedom of contract” dear to the nation’s rising business class. This “substance”—this liberty that could be taken away by no legitimate due process—was more morally defensible than slaveholding, but the interpretive sleight of hand to “discover” a protection that wasn’t in the Constitution was the same as in Dred Scott. The 1905 Lochner case symbolizes this period in constitutional history: it struck down, on the substantive due process grounds that it violated freedom of contract, a New York law that limited bakers’ workweeks to 60 hours for health reasons—only one of hundreds of federal and state social welfare laws, including early New Deal initiatives, that couldn’t get past the courts during these decades. “Like its even more unseemly ancestor Dred Scott,” observe legal thinkers Eugene Hickok and Gary McDowell, “Lochner helped set in motion the mechanics of government by judiciary.”

Just as Dred Scott helped precipitate a war, “Lochnerizing” the Constitution provoked a constitutional crisis. Frustrated by the Supreme Court’s thwarting of New Deal legislation, President Franklin D. Roosevelt threatened in 1937 to “pack” the Court with six additional judges who shared his politics. The justices averted the threat by mending their high-handed ways.

The modern era of judicial activism opened in 1954 with one of the most celebrated Supreme Court decisions ever: Brown v. Board of Education. In Brown, a unanimous Court, led by the new chief justice Earl Warren, ruled that state-mandated school segregation violated the Fourteenth Amendment’s injunction that no state may “deny to any person within its jurisdiction the equal protection of the laws.” Brown struck down a shameful injustice; but how the Court broke the color line had deeply troubling implications.

The Court faced two big constitutional obstacles to ending school segregation. First: precedent. The infamous 1896 Plessy v. Ferguson decision held that segregation in the provision of government services didn’t violate equal protection, as long as the separate facilities were equivalent. This “separate but equal” precedent wasn’t insurmountable, but the Court, following the legal tenet of stare decisis—that courts should stand by what they have already decided—doesn’t reverse itself lightly. Still, even nearly 60 years of settled law could go out the window, if the Court could show that the original purpose of the equal protection clause barred school segregation.

Sadly, this approach seemingly hit a brick wall too. The aim of the Reconstruction Congress that adopted the Fourteenth Amendment in 1868, most legal scholars agreed, was to protect newly emancipated blacks from violations of their basic civil rights—such as access to courts or ability to buy and sell property—not to grant them political rights (the vote) or social rights (such as desegregated education). After all, the same legislators who devised the equal protection clause funded Washington’s segregated schools without qualms. Precedent and history—key pillars of jurisprudence—thus didn’t support the view that the Constitution demanded desegregation. Legal historian Alfred Kelly, who helped the NAACP lawyers prepare the Brown brief, later admitted, “I didn’t see a good argument that might be available to us.”

But the Warren Court wasn’t going to let these difficulties impede it from doing justice. “We cannot turn the clock back to 1868, when the [fourteenth] amendment was adopted, or even to 1896 when . . . Plessy was written,” Warren’s opinion for the Court asserted. Instead, the Court turned to contemporary social psychology that purported to show that segregation harmed the self-esteem of black schoolchildren and made it tougher for them to learn. Therefore, the Court said, separate wasn’t equal in education, regardless of what the Fourteenth Amendment’s framers intended or the Plessy Court believed.

However well-intentioned, the argument offered no legal ground for the Court’s holding. “As a matter of principled constitutional law,” says Northwestern legal historian Stephen Presser, “the Brown opinion is almost certainly indefensible.”

Brown, acclaimed by the nation’s opinion leaders, became a powerful spur to future judicial lawmaking, this time to advance the values of the liberal elites, rather than the business elites (as in Lochner) and (again by contrast with Lochner) to force, rather than retard, social change. From Brown onward, the equal protection clause of the Fourteenth Amendment became a powerful engine of judicial might. In later decisions, the Court would take equal protection far beyond the intention of its framers to encompass new legal guarantees not just for blacks but for women, homosexuals, and other “disfavored” groups.

In addition, Brown began the shift in elite perception of what made for a good judge and a good judgment. By the time the sixties cultural revolution was in full swing, writes Harvard law professor Mary Ann Glendon, “judges began to be praised for qualities that once would have been considered problematic: compassion rather than impartiality, boldness rather than restraint, creativity rather than craftsmanship, and specific results regardless of the effect on the legal order as a whole.”

The heroic new judge drew inspiration from a doctrine called “the Living Constitution,” which held, as Justice William Brennan put it, that: “The genius of the Constitution rests not in any static meaning it might have had in a world that is dead and gone, but in the adaptability of its great principles to cope with current problems and current needs.” More than adapt, the Living Constitution could bring about epochal social changes whenever judges like Brennan believed that justice demanded them. Lawmakers began to put off contentious issues, looking to the Supremes to take them off their hands.

Amazingly, based on Brown—an anti-discrimination decision—the Court subsequently came up with a new rationale for government to discriminate by race. After all, Brown hadn’t overturned Plessy or endorsed Justice John Harlan’s famous dissent in Plessy that “[o]ur Constitution is color blind, and neither knows nor tolerates classes among citizens.” So even though the Warren Court, citing Brown, desegregated everything from city golf courses to public swimming pools, it had never ruled that equal protection forbade racial discrimination by government.

In the last of its desegregation decisions, Loving v. Virginia, the Warren Court established the current legal test for identifying racial discrimination that is lawful. Under the Living Constitution, race is a “suspect classification”; courts must give any law or government action that uses race as a criterion “strict scrutiny,” asking whether it is narrowly tailored to serve a compelling state interest and whether any colorblind alternative is available. This test, observes political scientist Richard Morgan, is an “intellectual disaster”—a permanent invitation to legislate from the bench, since its criteria are “essentially political judgments about wise public policy.”

These highly subjective criteria are objectionable enough, but—worse still—the Court used them to ride roughshod over Congress’s explicit intentions, expressed in the landmark 1964 Civil Rights Act. This exemplary legislation not only forbade segregated public schools but also made it illegal for government, or for any employer engaged in interstate commerce or receiving government contracts, “to discriminate against any individual . . . because of such individual’s race, color, religion, sex, or national origin.” The act also required government and private employers to make a special effort to hire qualified minorities and to give them training if necessary to get them up to speed—“affirmative action,” as it later came to be called. Such special efforts, the law clearly stated, did not mean preferential treatment, quotas, or reverse discrimination. Period.

But quotas and reverse discrimination are exactly what the Supreme Court brought about. In decisions from Griggs in 1971 to Bakke and Weber in the late seventies to Metro Broadcasting in 1990, the Court turned inside out the meaning and intent of the Civil Rights Act. The Court, rejecting a “literal interpretation” of the act’s words, ruled that the law actually didn’t prevent racial preferences in hiring and promoting blacks, that universities could use race in admissions, and that judges could even impose strict racial quotas in hiring and promotions on employers who had discriminated in the past against blacks. (Over time, an estimated 70 percent of the U.S. population, including women, the elderly, and various racial and ethnic groups, became eligible for court-approved preferential treatment.) Though since 1995 the Supreme Court has narrowed affirmative action, it still rules unpredictably on the question—depending, says political scientist Morgan, on “the vote of Sandra Day O’Connor, who alone (perhaps) understands what she means by the key terms.”

The Court tortured the Civil Rights Act with equal disregard for congressional intent in trying to speed up school desegregation. The act plainly stated that desegregation didn’t mean assigning students to schools by race, and busing them there, to promote integration. But in Green (1968) and then, under Chief Justice Warren Burger, in Swann (1971), the high court gave the go-ahead to district courts to bus students to achieve racial balance. Whatever the worth of policies like busing and affirmative action, the post-Brown Court, in imposing them, despite the intent of Congress, on a citizenry that largely views them as unjust, was doing exactly what Hamilton said it couldn’t lawfully do under our Constitution: exercising will, not judgment.

The moral rightness of Brown’s result, however, has meant that critics of the Living Constitution doctrine constantly must deal with the charge that, if they object to Brown’s jurisprudence—its means rather than its end—they must be racists. What makes this charge doubly absurd is that the American people, not the judges, finally sent Jim Crow packing. As Wallace Mendelson and other historians point out, widespread segregation in the South continued after Brown, until Congress passed the 1964 Civil Rights Act and the 1965 Voting Rights Act and Elementary and Secondary Education Act. Thereafter, says Mendelson, “revolutionary changes followed”—sparked, as is appropriate, by the legislation of the people’s elected representatives, not the dictates of unelected judges.

Judicial originalists have no trouble supporting Brown’s outcome without embracing the Living Constitution dogma. In Judge Robert Bork’s view, whatever the Fourteenth Amendment’s framers might have thought about the compatibility of segregation with the explicit ideal of equality animating the amendment, it was painfully obvious by the 1950s that “separate” facilities for blacks always were inferior. Recent research by Michael W. McConnell, another leading originalist and a Bush appeals-court nominee, shows that, shortly after the Fourteenth Amendment’s ratification, a majority of those who voted for the measure in Congress believed that it was incompatible with state-sponsored segregation. Originalist Clarence Thomas, going further, maintains that each of the post–Civil War amendments—the Thirteenth (banning slavery), the Fourteenth, and the Fifteenth (extending the vote to blacks)—embodies the magnificent vision of the Declaration of Independence that “all men are created equal”: and so equal protection, interpreted against this backdrop, renders unconstitutional not only segregation but any laws that take race into account. Harlan was right, says Thomas: the Constitution is colorblind.

Other originalists disagree. Objects legal scholar Terry Eastland: “No majority of the Supreme Court has ever said, either explicitly or by implication, that the Constitution is colorblind.” And because Brown didn’t overrule Plessy, and went on to use that decision to turn the anti-discrimination 1964 Civil Rights Act into a warrant for government racial preferences, other originalists of Eastland’s stamp believe that those who seek a colorblind Constitution should work to pass a constitutional amendment blocking government from distinguishing by race. Argues political scientist Morgan: “This would finally complete the work of Reconstruction, align the text of the Constitution with our national ideals, and bury Jim Crow the way he should have been buried in the first place—by votes in legislative assemblies.”

If no one disagrees with the anti-segregation result of Brown, many disagree strongly with the anti-law-enforcement result of another of the Warren Court’s forays into legislating from the bench. Driven by the 1960s elite’s suspicion of police authority, the Court conjured out of the Living Constitution completely new procedural rights for the criminally accused that revolutionized the way the nation dealt with crime and helped fuel the crime explosion that began in the mid-1960s.

In Mapp v. Ohio (1961), the Warren Court rewrote the Constitution to force state courts to exclude from criminal cases any evidence that police obtained in an improper search, even if the cops’ error was inadvertent and tiny. This “exclusionary rule,” derived from the Fourth Amendment’s protection against unreasonable search and seizure, had never applied to states before 1961. The majority in Mapp pulled off this remarkable extension of the exclusionary rule’s scope by “incorporating” the Fourth Amendment guarantees, which apply to the federal government, into the rights protected by the due process clause of the Fourteenth Amendment, which applies to the states. This “nationalization” of the Bill of Rights, which began in the 1920s but principally occurred in the 1960s, “did more than anything else to make the Supreme Court the most powerful voice in the land in defining the rights of the American people,” opines law scholar Scott Douglas Gerber.

Mapp wasn’t the only new impediment the Court put in law enforcement’s way. In 1966, it handed down, in a five-to-four ruling, the famous Miranda decision. “You have the right to remain silent; anything you say can and will be used against you in a court of law; you have the right to an attorney, and if you cannot afford one, one will be appointed for you”—anybody who’s ever watched a cop show knows the warnings. Here the Court took the Fifth Amendment’s rule against self-incrimination, which had applied only to courtroom testimony, and—with breathtaking interpretive license—extended it to cover statements and confessions made during police interrogations.

Miranda brusquely threw out the centuries-old “voluntariness” test for confessions. Was a confession the result of an unconstrained choice? If yes, the test held, prosecutors could use it against the confessor; if not—if interrogators had coerced the confession—prosecutors couldn’t use it. Justice John Marshall Harlan (grandson of the Plessy dissenter) excoriated the Court for replacing this test, “an elaborate, sophisticated, and sensitive approach to admissibility of confessions,” with an inflexible, heavy-handed approach that, like Mapp, made it impossible to balance the rights of the accused against the competing values of finding the truth and protecting society.

Stung by two decades of public backlash against it for being more solicitous of criminals’ rights than of public safety—and by research showing that these decisions had reduced the conviction rate among criminal suspects—the Supreme Court retreated slightly from its unwavering suspicion of law enforcement in the 1984 United States v. Leon decision. The Court said that prosecutors could legally introduce evidence that police had seized under a defective warrant if the officers had relied on the warrant in good faith. Even so, in Dickerson v. United States, the Rehnquist Court upheld Miranda in 2000 by a seven to two majority. The Court rested its case not on the Constitution’s text—how could it?—but on its own precedent and on the fact that Miranda “has become embedded in routine police practice to the point where the warnings have become part of our national culture,” as Rehnquist’s opinion for the Court explained. A dissenting Scalia, joined by Clarence Thomas, blasted the majority for maintaining the “power judging” that had foisted Miranda on police in the first place, in contravention of the Constitution’s plain meaning.

The Court has subjected the First Amendment to a stiff dose of “power judging” as well. It has used the amendment’s religion clause—“Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof”—to erect a nearly impassable “wall of separation” between church and state, a wall that the Framers never envisioned. Washington, for example, thought religion “indispensable” to the “dispositions and habits which lead to political prosperity”—a view that seems to belong to a different universe from a 2000 Supreme Court ruling that a short, freely chosen, nonsectarian, and non-proselytizing prayer delivered by a student before a high school football game represented an unconstitutional establishment of religion.

The Court has also inverted the original meaning of the First Amendment’s free speech clause, which the Framers intended to protect political speech but not to give license to indecency and obscenity. As Supreme Court Justice Joseph Story, the early nineteenth century’s leading constitutional authority, put it: “That this amendment was intended to secure to every citizen an absolute right to speak, or write, or print whatever he may please, without any responsibility, public or private, therefore, is a supposition too wild to be indulged in by any rational man.” As late as 1942, a unanimous Court ruled that the First Amendment didn’t protect obscene or lewd words because “such utterances are no essential part of any exposition of ideas, and are of such slight social value as a step to truth that any benefit that may be derived from them is clearly outweighed by the social interest in order and morality.”

But all that began to change with the Court’s 1971 Cohen v. California ruling, which threw out the disorderly-conduct conviction of a man who had refused to take off a jacket emblazoned: FUCK THE DRAFT. Opined Justice Harlan, writing for the Court: “[O]ne man’s vulgarity is another’s lyric.” Cohen and myriad decisions since have made it nearly impossible for communities to regulate any speech or image. Just this year, the Court struck down a law criminalizing “virtual” kiddie porn, including computer-created images of kids having sex.

The high court protects pornography, but it has curtailed the political speech essential to democratic debate that it was the original purpose of the First Amendment to protect. In Buckley v. Valeo (1976) and Nixon v. Shrink Missouri Government PAC (2000), the Court upheld legislation that, seeking to stamp out corruption or even the appearance of corruption, strictly limited political contributions. But telling someone he can’t spend his money promoting his political views is a pretty clear-cut infringement of his political speech, exactly what the First Amendment aimed to protect.

Questions of sex have tempted the Court to bend and twist the Constitution almost as energetically as questions of race. Roe v. Wade (1973) is the most famous case in point. Whatever your views about abortion, it is hard to deny that, as jurisprudence, Roe is embarrassingly shoddy. In a 51-page majority opinion by Justice Harry Blackmun that lacked any discernible legal reasoning, the Court based itself on the “privacy” right of married couples to use contraceptives that it had found in the “penumbras, formed by emanations” of the Bill of Rights in the 1965 Griswold v. Connecticut case. It asserted that this new guarantee of privacy, conjured up like a will-o’-the-wisp rising out of swamp gas, included an absolute right, protected by the due process clause of the Fourteenth Amendment, for all women to terminate pregnancy up until the third trimester—a penumbra formed by emanations indeed.

Then-associate justice Rehnquist’s dissent exposed Roe’s constitutional illegitimacy. When Congress adopted the Fourteenth Amendment, he noted, at least 36 state or territorial laws curbed abortion, and no one questioned their constitutional validity at the time. Many of those laws were still on the books in 1973. “The only conclusion possible from this history is that the drafters did not intend to have the Fourteenth Amendment withdraw from the States the power to legislate with respect to this matter,” said Rehnquist. Given that history, as well as the Court majority’s comical arguments, Roe was, as Justice Byron White’s dissent put it, nothing more than “an exercise of raw judicial power.”

In the 1992 Casey decision, the Court reached the furthest limit of judicial invention ever—or at least, so far. In Casey, Justices Anthony Kennedy, Sandra Day O’Connor, and David Souter co-authored an opinion that defended a woman’s right to end her pregnancy by rooting it in a new concept of liberty. “At the heart of liberty [as protected by the due process clause],” the justices wrote, “is the right to define one’s own concept of existence, of meaning, of the universe, and of the mystery of human life.” Not a word in the Constitution acknowledges such a right, whether to allow abortion or anything else. What could it possibly mean? If I’m a Muslim, and my “concept of meaning” allows me multiple wives, do I have a constitutional right to have the state recognize my marriages as legal? If so, laws against polygamy must be unconstitutional. In a nation that values individualism and the pursuit of happiness, it’s hard to imagine any law that wouldn’t stand in the way of somebody’s “concept of meaning.”

Casey, legal scholars think, led to the 1996 Romer v. Evans decision, in which the Court struck down Colorado’s democratically crafted constitutional provision that homosexuals and bisexuals shouldn’t get special rights beyond those granted to the rest of the citizenry. Such a provision, the Court argued, could only be based on irrational “animus” against homosexuals. Scalia’s withering dissent spells out just how arrogantly antidemocratic and illegitimate the Court’s reasoning was. “Since the Constitution of the United States says nothing about this subject [homosexual rights],” he argued, “it is left to be resolved by normal democratic means, including the democratic adoption of provisions in state constitutions. This Court has no business imposing upon all Americans the resolution favored by the elite class from which the Members of this institution are selected, pronouncing that ‘animosity’ toward homosexuality is evil.” Romer also sparked a heated symposium in the respected, highbrow religious magazine First Things, in which several constitutional scholars, including Bork and Princeton legal philosopher Robert George, argued that the imperial judiciary that had delivered Casey and Romer had left religiously orthodox Americans and moral conservatives little recourse but civil disobedience to protect their values.

Affirmative action and busing, new rights for the criminally accused, a First Amendment that protects virtual kiddie porn but not a non-denominational prayer before a school football game—these are just some of the major ways the federal judiciary has helped remake America since Brown. The Supreme Court has created due process rights for public school pupils facing disciplinary proceedings and for youths in the juvenile justice system; it has struck down as violations of equal protection state legislatures designed—like the Congress—with one population-based legislative chamber and the other based on geographic boundaries; it has churned out a torrent of employment regulations—everything from rendering aptitude tests unlawful if they have a “disparate impact” on minorities to making it harder to fire people; it has overturned a state law banning partial-birth abortions, which even many abortion supporters consider infanticide; it has, most recently, banned the death penalty (which the Bill of Rights explicitly allows) for the mentally retarded. Cumulatively, these decisions have removed many of the most important moral and social issues from the political arena, shrinking the self-government that America was founded to guarantee. “Day by day, case by case,” laments Scalia, the judges are “busy designing a Constitution for a country I don’t recognize.”

Since these decisions have almost invariably pushed forward the Left’s political agenda, it’s unsurprising that an entire industry has sprung up in the left-leaning academy to justify and advance the Living Constitution idea. The University of Chicago’s Cass Sunstein, for instance, argues that the Constitution directs the judiciary to function as democracy’s referee, deciding which choices of the people are what they really intend, and which are in some sense accidental and therefore non-binding. Harvard’s Laurence Tribe, author of a popular constitutional-law textbook, argues, like Justice Brennan, that the Constitution’s grand principles must evolve with time to meet the changing needs of society. Judges supply the correct interpretation of those principles, not based on what the Framers had in mind, but in accordance with the views of enlightened Americans—which means the left wing of the Democratic Party. The Critical Legal Studies movement and its offshoots hold that jurisprudence merely expresses race, gender, or economic power relations, rather than any effort to do objective, disinterested justice, so creative interpretations of the Constitution that help improve the lot of the disenfranchised are turnabout that is fair play.

Against such thinkers stands a much smaller band of originalists defending what Scalia has memorably called the “Enduring Constitution.” Originalists argue, first, that America adopted a written Constitution precisely so that its meaning would not change over time. “Otherwise,” Thomas points out, “we would have adopted the British approach of an unwritten, evolving constitution.” Second, originalists believe that high-court judges must base their decisions on the Constitution’s text and structure, as originally understood. They also hold that legal and constitutional texts have a limited range of meaning that judges can gloss rightly or wrongly; truth is not an obsolete concept in law. Finally, originalists think that judges can be impartial interpreters of the law. “In order to be a judge,” Justice Thomas has written, “a person must attempt to exorcise himself or herself of the passions, thought, and emotions that fill any frail human being. He must become almost pure, in the way that fire purifies metal, before he can decide a case.” Otherwise, he’s a politician. Though a minority view on campus, originalism has made some inroads even there, especially with the success of Princeton political scientist Keith Whittington’s book, Constitutional Interpretation.

Still, originalism is the only jurisprudence fully compatible with our form of government. Even if one did buy the odd, recent idea that jurists should be able to ignore or deconstruct the text and original meaning of the Constitution, Scalia asks, why, in our democracy, should judges be the people uniquely entitled to figure out society’s needs? “It is simply not compatible with democratic theory that laws mean whatever they ought to mean, and that unelected judges decide what that is,” he declares.

Cass Sunstein has come up with a theory that advocates institutionalizing these differences of opinion on the Supreme Court itself. In naming and confirming justices, argues Sunstein, who has advised the Democrats on these matters, the president and the Senate should ensure judicial pluralism, with appropriate representation of each approach. New York Democratic senator Chuck Schumer has taken Sunstein’s argument a step further, now that a popular conservative president, who looks to Scalia and Thomas as his model judges, may get to appoint two or more Supremes, thus changing the tenor of the Court and perhaps ultimately undoing some of the Left’s most cherished social gains. Let’s just make the confirmation of judges a raw political fight, says Schumer, since the two parties have distinct and incompatible jurisprudential ideologies. Judging equals politics, plain and simple, on this view. With gladiatorial Schumerism on the scene, worries legal historian Stephen Presser, the threat to an independent judiciary “is probably more real now than it has been for three-quarters of a century”—since, that is, FDR’s court-packing scheme.

Even from a narrowly partisan standpoint, though, the Left would be wise to think hard about whether it makes sense to reject originalism and treat the judiciary as a political war machine. After all, one can imagine a Court made up of real conservative activists who’d go beyond the Constitution to dismantle the welfare state completely, say, or to ban abortion (as opposed to letting states decide if they want to make it legal or to regulate it, as would happen if the Court simply overturned Roe). True, such an outcome is unlikely, since most conservative jurists are reflexively originalist in their jurisprudence. But the fact that it could happen should make liberals consider whether they might not be better off living under the Constitution our Framers gave us, instead of one subject to constant political alteration. Originalism ultimately favors neither Left nor Right, but self-government. President Ronald Reagan, swearing in Scalia in 1986, put it beautifully: the Founding Fathers, he said, “knew that the courts, like the Constitution itself, must not be liberal or conservative. The question was and is, will we have government by the people?” That is still the question.


City Journal is a publication of the Manhattan Institute for Policy Research (MI), a leading free-market think tank.
