Few federal statutes these days receive as much attention as Section 230 of the Communications Decency Act of 1996. That provision, which has become a stand-in for the broader Big Tech debates roiling our politics, grants providers and users of “interactive computer services” legal immunity for their moderation of hosted third-party speech. While politicians ask whether tech firms really deserve such immunity, legal theorists are debating just how extensive it should be under a proper construction of the law—a debate that has broader implications for statutory interpretation.

The most controversial portion of Section 230 is the so-called Good Samaritan provision in subsection (c)(2)(A), which stipulates: “No provider or user of an interactive computer service shall be held liable on account of . . . any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.” Taken literally, the language is all-encompassing. Columbia law professor Philip Hamburger wrote in January that the Big Tech giants treat this provision “as a license to censor with impunity,” restricting any material to which they object.

But that reading of the statute has come under scrutiny. Last October, Supreme Court Justice Clarence Thomas questioned “whether the text of this increasingly important statute aligns with the current state of immunity enjoyed by Internet platforms” and accused previous courts of “[a]dopting the too-common practice of reading extra immunity into statutes where it does not belong.” And last month, UCLA law professor Eugene Volokh flagged, on his popular legal blog, his evolving thoughts on the proper interpretation of Section 230’s subsection (c)(2)(A). Volokh and coauthor Adam Candeub, a law professor at Michigan State University, explained that they had concluded that the Good Samaritan provision’s “otherwise objectionable” language does not confer blanket immunity on tech platforms for their content-moderation decisions:

Section 230(c)(2) was enacted as sec. 509 of the Communications Decency Act of 1996 (CDA), and all the terms before “otherwise objectionable”—“obscene, lewd, lascivious, filthy, excessively violent, harassing”—refer to speech that had been regulated by the rest of the CDA, and indeed that had historically been seen by Congress as particularly regulable when distributed via electronic communications. Applying the ejusdem generis canon, “otherwise objectionable” should be read as limited to material that is likewise covered by the CDA. . . .

“[O]bscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable” in § 230(c)(2), properly read, doesn’t just mean “objectionable.” Rather, it refers to material that Congress itself found objectionable in the Communications Decency Act of 1996, within which § 230(c)(2) resided. And whatever that might include, it doesn’t include material that is objectionable on “the basis of its political or religious content.”

The upshot is that Volokh and Candeub would construe the provision narrowly, potentially leaving tech firms liable for politically or religiously charged content-moderation decisions.

Volokh and Candeub’s analysis has important ramifications. They invoke the arcane ejusdem generis canon of statutory construction, under which, as the Supreme Court has put it, where “general words follow specific words in a statutory enumeration,” “the general words are construed to embrace only objects similar in nature to those objects enumerated by the preceding specific words.” Applied to Section 230’s Good Samaritan provision, that canon cuts against an expansive interpretation covering any content whatsoever that a Big Tech platform views as objectionable.

But they also reject a literalist interpretation of subsection (c)(2)(A) in favor of one more consonant with the ultimate end of the Communications Decency Act: combating online pornography. They thus interpret an ambiguous provision in light of the statute’s expressly articulated purpose: preserving the Internet as “a forum for a true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity.”

This resembles an approach to constitutional and statutory interpretation that grounds legal meaning, at least in part, in purposivism rather than in a literalist textualism that sometimes ignores the normative intentions of the law. As Big Tech critic Rachel Bovard has written, Section 230 “was enacted nearly 25 years ago as something akin to an exchange: Internet platforms would receive a liability shield so they could voluntarily screen out harmful content accessible to children, and in return they would provide a forum for ‘true diversity of political discourse’ and ‘myriad avenues for intellectual activity.’” This overarching telos is precisely the sort of thing that can help guide subsequent expositors—just as it seems to help guide Volokh and Candeub, if only implicitly. Applied to Section 230, an interpretive approach that considers the moral purpose behind a law could leave Big Tech liable for large swaths of discretionary content moderation.
