When a missing Oxford comma in a law cost one company $5 million, the lesson was clear: the devil is in the details. But court cases can be lost not only over a comma; they can also be lost over a definition.

Consider recent cases in Arkansas and Ohio, where courts struck down state age-verification laws over deficiencies in their definitions of “social media.” Legislators in both parties may want to hold Big Tech accountable, but their reforms will go for naught unless they sweat the details of their definition.

Some have likened Big Tech to Big Tobacco, but social media, unlike cigarettes, is a forum for speech. Policymakers will therefore need to account for the First Amendment in writing a legal definition of social media. Supreme Court precedent requires legislatures to consider three factors in crafting such definitions: accuracy, content-neutrality, and precision (avoiding vagueness).

First, to survive courts’ scrutiny, a law’s definition of social media must be accurate. Such a definition need not be perfect, but the less accurate it is, the greater its exposure to legal risk.

An accurate definition not only should include social media sites; it also should exclude sites that are not social media. For example, if Yelp—a restaurant- and business-review site—is not a social media site, the definition should not include Yelp.

Additionally, a definition’s accuracy not only should be evaluated against the Big Tech platforms; it also should be evaluated against sites such as Yelp or, say, Reddit and Etsy. If policymakers are looking for a list of other examples—some of which are social media sites, and some of which are not—a good starting point is the list of members of the trade association Internet Works.

Consider a lawmaker who drafts a definition of social media that includes Yelp—though that was not his intention. When Yelp points that out, the lawmaker’s first inclination may be to add an exception for business-review sites. That would improve the definition’s accuracy, but it could also render it content-based, not content-neutral.

The Supreme Court subjects “content-based” laws to strict scrutiny, but “content-neutral” laws to intermediate scrutiny. The way to draft a content-neutral law, per the Court’s 1994 ruling in Turner Broadcasting System v. FCC, is to avoid language that “distinguish[es] favored speech from disfavored speech on the basis of the ideas or views expressed.”

When determining whether a law is content-neutral, courts will look at both what the regulation is and whom it applies to. For example, if a law contains a content-neutral regulation but curiously applies it only to Elon Musk’s X, the courts will likely rule that the law is content-based. The legal definition of social media determines whom the law applies to; that’s why the details of that definition matter.

After Arkansas and Ohio passed their age-verification laws, for example, courts ruled that the states’ definitions of social media were content-based, which subjected both laws to strict scrutiny. Strict scrutiny is often described as “strict in theory, fatal in fact”; as a case in point, the courts blocked both laws.

Here, it helps to evaluate a definition both with and without its exceptions. In Ohio, the judge said that even without the exceptions, the state’s definition of social media was content-based, though it was “a close call.” He did not, though, consider the state’s long list of exceptions to be a similarly close call: “The exceptions to the Act for product review websites and ‘widely recognized’ media outlets, however, are easy to categorize as content based.” In a real-world example that closely mirrors our Yelp hypothetical, the judge added, “For example, a product review website is excepted, but a book or film review website, is presumably not.”

Exceptions, by their nature, create the perception that some group is receiving special treatment. In the social-media context, such exceptions suggest that a law favors certain forms of content over others. They also give a law’s opponents more opportunities to strike it down—when a law has 13 exceptions, lawyers get 13 bites at the apple to prove that one of those exceptions is content-based.

So instead of adding an exception, perhaps the legislator in our Yelp example could modify his definition of social media such that it includes only online platforms with “features that are harmful to minors.” But then who gets to judge whether a feature is harmful? That raises a different legal issue: vagueness, which runs afoul of the third key factor, precision.

Any student or employee accused of violating a vaguely written code of conduct will inherently understand the dangers of such policies. If a campus policy uses a nebulous term like “hate speech,” for example, and a woke campus bureaucrat believes that conservatives are “hateful,” the effective definition of “hate speech” is any speech that the bureaucrat hates.

When laws are similarly imprecise, courts can use the “void for vagueness” doctrine to strike them down. Laws, after all, should provide people with fair notice of what they do and do not allow. When laws are vague, people are no longer ruled by the law; they are instead ruled by the government bureaucrats who enforce the law’s provisions however they interpret them.

And while vagueness is more of a due-process concern than a free-speech concern, courts will more strictly enforce the “void for vagueness” doctrine when free speech is involved. As the Supreme Court said in 2012’s FCC v. Fox Television Stations, “When speech is involved, rigorous adherence to those requirements is necessary to ensure that ambiguity does not chill protected speech.”

When Arkansas was sued over its age-verification law, for instance, the state ran into an even more fundamental problem: its own witnesses could not agree on whether the definition of “social media platform” included Snapchat. The court, predictably, ruled that the law was too vague.

Let’s revisit the Yelp hypothetical. If adding an exception or using vague language is a bad idea, what can a legislator do to ensure his social-media definition includes those platforms he wants to target and excludes those he doesn’t?

He could distinguish between platforms where content is primarily user-generated and those where user-generated content is built around other types of content: comments on a news article, say, or reviews for a product (or business or film).

He also could ask a tech company that objects to his definition of social media three questions. First, why should the company in question be excluded? In some cases, the answer is straightforward, but in others, it is not so clear-cut. Second, how would the company change the definition, without adding an exception? And third, are the firm’s proposed changes content-neutral and not vague?

Before ever setting pen to paper, legislators must analyze the actual problem social media poses. We know that social media harms kids, but that insight alone doesn’t help us write a definition. Policymakers must ask why it is harmful: what are these platforms’ general characteristics, and what problems do they present? If legislators can more clearly define the problem, they can also write a clearer definition.

Once legislators can explain why social media harms kids, they should commit that explanation to paper in the form of legislative findings. Normally, findings are written to persuade policymakers and outside groups to support a bill. These findings must be written differently, however, because they have a different audience: the courts. In 1994’s Turner Broadcasting System v. FCC, for example, the Supreme Court ruled that the must-carry rules—which forced cable companies to carry local broadcast channels—were content-neutral, and Congress’s “unusually detailed statutory findings” played a role in the Court’s decision.

As an example, a legislator proposing a social-media bill should include this finding: “Users frequently encounter sexually explicit material accidentally on social media.” This finding is easy to prove—especially for anyone with an X account—and it’s also a direct callback to Reno v. ACLU (1997), in which the Supreme Court wrote, “users seldom encounter such content accidentally.”

(A legislator should also include this finding: “the State has a compelling interest in protecting the physical and psychological well-being of minors.” There is no need to reinvent the wheel; the Supreme Court already recognized this as a compelling interest in Sable Communications v. FCC (1989).)

As for the definition itself, an open secret is that the definition can be as broad or narrow as a legislator wants it to be. Ultimately, the goal is not to align the legal definition of social media with the common definition of social media; the goal is to align that legal definition with social media’s specific harms. The statutory definition, therefore, should naturally follow from the legislative findings.

It is easy to know that social media harms kids; it is not so easy to write a legal definition of social media. Lawmakers must sweat the details because they will matter in the inevitable legal battles. When the devil is in the details, angelic intentions are not enough to carry the day.
