Content Moderation at the Supreme Court: Free Speech On – and For – Social Media Platforms

David B. McGarry

January 16, 2024

First Amendment advocates everywhere are girding their loins for this Supreme Court term. Although the justices admit to being “not the nine greatest experts on the internet,” they will soon rule on several cases that, if decided incorrectly, could curtail free speech significantly in the digital world.

Two cases in particular – Moody v. NetChoice and NetChoice v. Paxton – will determine whether, as a constitutional matter, online platforms have a First Amendment right to moderate content as they see fit. As explained below, this question has profound implications for users as well. The cases arise from challenges to Florida and Texas laws that sought to combat social-media platforms’ perceived bias against conservative users. In differing ways, both laws seek to forbid social-media platforms from moderating user-generated content based on viewpoint (among other provisions). The 11th U.S. Circuit Court of Appeals largely blocked Florida’s statute, while the 5th Circuit upheld Texas’s, creating a glaring circuit split for the Supreme Court to resolve.

NetChoice and the Computer & Communications Industry Association (CCIA), the trade groups that brought both challenges, make a thoroughly convincing case that these statutes violate the First Amendment. Although large social-media platforms too often moderate with a censorial, left-wing bias, defending their speech rights remains both legally and morally imperative for free-speech advocates of all persuasions.

As oral arguments loom, the Taxpayers Protection Alliance offers the following analysis.

Platforms Have a First Amendment Right to Promote, Demote, and Remove Content

“When private parties refuse to publish speech, they are engaged in protected First Amendment activity, as numerous cases of this Court hold,” NetChoice and CCIA write in a brief. “The notion that the government may compel private speech in the name of quelling ‘censorship’ turns the First Amendment on its head.”

All individuals and businesses have a constitutional right to decline to endorse, publish, or associate with speech they object to. As Supreme Court Justice David Souter wrote for the majority in Hurley v. Irish-American Gay, Lesbian and Bisexual Group of Boston (1995), “one important manifestation of the principle of free speech is that one who chooses to speak may also decide ‘what not to say.’” Social-media platforms exercise this right when publishing and promoting – or demoting or banning – user-generated content. The government may not determine for platforms what content they ought to deem decent or worthy of the public’s attention.

As the trade groups explain at length, the Founders well recognized the right of publishers to decline to platform others’ work. That online platforms differ from the newspapers of yore does not erase their constitutional rights. Due to their scale, platforms moderate after user speech is published (in contrast to the pre-publication editing common at news outlets and magazines), but their editorial choices nonetheless remain inherently expressive – and constitutionally protected.

Uncommon Common Carriers

Florida and Texas seek to classify social-media platforms essentially as common carriers – and therefore as entities without ordinary speech rights. Unfortunately for the states, social media fits no established description of a common carrier besides “politically disfavored.”

Common carriers “hold themselves out as affording neutral, indiscriminate access to their platform without any editorial filtering” (per Judge Sri Srinivasan). But social-media platforms have never done so. They are no “dumb pipes” and do not simply aggregate user-generated posts for others to consume. On the contrary, platforms’ content moderation – an inherently discriminatory task – is a core function. They invest untold sums to build algorithms and employ moderators who sort, present, promote, and demote user-generated content.

Note that moderation involves far more than a binary choice to publish, or not to publish, certain types of speech. As the Twitter Files demonstrate, moderators can downrank platformed speech into obscurity. Algorithmic amplification has immense power to make user posts go viral or to bury them.

What content platforms consent to publish and what content they choose to promote (two distinct questions) shape their public image and the user experience. These choices are how platforms compete with one another. Facebook and 4chan provide wildly dissimilar products. And most moderation decisions have little to do with politics. Rather, they concern how to provide each user with maximally engaging content and keep advertisers happy.

Elon Musk’s acquisition of Twitter, which he rebranded as X, provides a case study. Muskian content moderation, seemingly more tolerant of fringy right-wing speech, has driven many users to abandon the platform for competitors such as Mastodon and Meta’s Threads.

Executives at Meta’s Threads platform say they view all news content skeptically, preferring instead apolitical fare such as “sports, music, fashion, beauty, entertainment, etc (sic)”. This business model – emphasizing culture and comity – will attract users who have wearied of the political food fights so common on other social-media platforms and alienate those who feed on them.

NetChoice illustrated these dynamics well in a filing in another recent case (citations omitted):

Most online providers have adopted terms of service, content policies, and guidelines governing use of their services. These policies can vary significantly. For example, Reddit’s Content Policy states that Reddit “is a place for creating community and belonging, not for attacking marginalized or vulnerable groups of people” and so users who “promote hate based on identity or vulnerability will be banned.” Truth Social, in contrast, allows users to report certain types of content – such as “content that depicts violence or threat of violence” – but expresses a “preference” that “the removal of users or user-provided content be kept to the absolute minimum.” The New York Times requires the “use [of] respectful language” and tells users (among other things) to “debate, but don’t attack.” The Washington Post prohibits content that is “hateful,” “contains advertising,” is “in poor taste,” or “is otherwise objectionable.” Such private house rules enable providers to build communities and control their services through the exercise of discretion – and critically, virtually all extend far beyond permissible government regulation.

Much of the confusion surrounding content moderation stems from the fact that most political commentators, like the Supreme Court justices, are not the “greatest experts on the internet.” They failed to grasp that their everyday social-media experiences depended entirely on innumerable instances of content moderation (e.g., sorting and recommending interesting content or downranking truculence). Then, as platforms’ left-wing biases became apparent, many conservatives began to conflate platforms’ discrimination against right-wing speech with content moderation per se. The proverbial baby – boring old content moderation, without which no platform can function – went flying along with the censorial bathwater.

Moreover, as the Phoenix Center’s Lawrence J. Spiwak notes, social-media services are technologically dissimilar from traditional common carriers. “[I]nternet platforms do not engage in providing interstate common carrier telecommunications services, and therefore are not currently subject to [Federal Communications Commission (FCC)] subject matter jurisdiction under Title II,” Spiwak writes. In fact, he continues, “the information infrastructure that carries [internet platforms’] services to end-users is not their own, but that of communications firms regulated under FCC jurisdiction.”

While most common-carrier regulatory regimes apply to entire industries, Florida’s and Texas’s laws would target only a few disfavored market actors. Further, as Spiwak argues persuasively, existing common-carrier law would translate poorly if applied to social media. All this suggests the states have embraced common carriage for its usefulness as a political cudgel, not as an apposite regulatory formula.

Market Success Does Not a Monopoly Make

Many argue that social-media platforms hold monopolies, and that the government therefore ought to regulate their content moderation to preserve de facto free speech in America.

A social-media landscape dotted with myriad platforms seems utterly incongruent with this assessment. Facebook, Twitter, Instagram, TikTok, Threads, Reddit, YouTube, Mastodon, Bluesky, Truth Social, Parler, Tumblr, Discord – monopolies galore!

Consider X, often fingered as a monopolist. As noted in National Review last year:

According to Pew Research data for 2021, fewer than one in four Americans used [X], which ranked seventh in overall popularity, well behind leaders YouTube (81 percent) and Facebook (69 percent). Even Pinterest (31 percent) and LinkedIn (28 percent) are more widely used. What’s more, over 95 percent of Twitter’s content is produced by only a quarter of its users. In total, Twitter has 206 million daily active users – Facebook (including Facebook Messenger) has nearly two billion.

Judge Andrew Oldham, writing for the 5th Circuit panel in NetChoice v. Paxton, offers a more refined version of this clearly absurd argument. Oldham held that “each Platform has an effective monopoly over its particular niche of online discourse” (emphasis added). However, his argument disintegrates nearly as quickly as the less sophisticated theories advanced by those less sophisticated than the judge.

Oldham states, for example, that “political pundits need access to Twitter.” But although X boasts a disproportionately politically active and influential user base to which political commentators certainly want access, it holds no monopoly. In fact, it faces tough competition, particularly for the attention of both older and younger users. (And the Musk era proves that users will leave a “monopolist” platform should it significantly alter its policies.)

Right-wing commentators such as Ben Shapiro achieve seismic reach on Facebook, a platform with an older audience. In 2021, Shapiro’s news outfit, the Daily Wire, “received more likes, shares and comments on Facebook than any other news publisher by a wide margin,” according to an NPR analysis. Meanwhile, the Pew Research Center reports that the share of U.S. adults who rely on TikTok for news has nearly quintupled since 2020, rising from 3 percent to 14 percent. A third of Americans ages 18 to 29 do so.

Section 230 Doesn’t Remove Platforms’ First Amendment Rights

Enacted in 1996, 47 U.S.C. § 230, better known as Section 230, protects interactive computer services (including social-media platforms) from civil liability for user-generated content. Some, including the 5th Circuit panel, argue the statute establishes Congress’s intent to treat social media as mere conduits of speech, not as publishers whose editorial choices involve some degree of protected expression.

This argument fails totally. First off, if Congress had intended Section 230 to strip platforms of their First Amendment speech rights, the law itself would violate the First Amendment (“Congress shall make no law…”).

But Congress intended no such thing. In fact, Section 230 seeks to promote content moderation. It provides that platforms’ decisions to downrank or deplatform some user speech do not constitute a liability-inducing endorsement of the speech they leave unmoderated. Many observers fail to appreciate that unmoderated social media quickly devolves into a quagmire of (legally protected) abuse, anger, and general unpleasantness; a relatively small – but energetic – contingent of bots and trolls can easily make any platform unattractive to the average user. Section 230 serves as an encouragement of good online governance, not a defenestration of platforms’ rights.

What’s more, other – more relevant – law suggests that the Congress that passed Section 230 recognized the inherent differences between traditional common carriers and internet platforms. As the 11th Circuit noted when it halted Florida’s law, “The Telecommunications Act of 1996 explicitly differentiates ‘interactive computer services’ – like social-media platforms – from ‘common carriers or telecommunications services.’”

Texas and Florida Want the Government to Subsidize One Side in the Marketplace of Ideas

The Texas and Florida legislatures expressly intended to promote a specific viewpoint – conservatism – and they advertised their intentions brazenly. This alone ought to raise judges’ suspicions. Florida designed its law “to combat what it perceived to be a concerted effort by ‘big tech oligarchs in Silicon Valley’ to silence ‘conservative’ speech” and to “ensure that the state’s preferred messages reach a broad audience,” as NetChoice and CCIA write. Texas officials said essentially the same things.

That the states limited these laws to cover only massive platforms – i.e., the ones generally considered to harbor liberal biases – further strengthens this contention. These laws do not apply to smaller, but still highly successful, right-wing platforms such as Truth Social, which reportedly engages vigorously in viewpoint-based content moderation.

Unintended Consequences Galore

Texas and Florida’s statutes would trigger myriad unintended consequences, degrading social media’s usefulness greatly.

Consider the following two examples.

The Texas law outright bars platforms from banning or otherwise disfavoring any speech based on viewpoint. This provision protects pro-terrorist, pro-KKK, or pro-Antifa posts as much as it does pro-conservative ones.

Florida’s law contains language insulating “journalistic enterprises” from content moderation. This, too, would backfire on conservative lawmakers. As the 11th Circuit notes, it “would prohibit a child-friendly platform like YouTube Kids from removing – or even adding an age gate to – soft-core pornography posted by PornHub, which qualifies as a ‘journalistic enterprise’ because it posts more than 100 hours of video and has more than 100 million viewers per year.”

Author’s Note: For readability, this post has largely omitted direct references to relevant caselaw. Those interested should refer to Miami Herald Publishing Company v. Tornillo (1974), Hurley v. Irish-American Gay, Lesbian and Bisexual Group of Boston (1995), Manhattan Community Access Corp. v. Halleck (2019), and 303 Creative LLC v. Elenis (2022), among many others.