
The Future of Content Moderation and Intermediary Liability

This is Part 1 of an ongoing series to detail the policy considerations for #Section230 and #ContentModeration. In this post, BPC covers the background of Section 230 in the US.

Part 1: US Considerations

Buried within the Communications Decency Act (CDA), which was itself tucked away in the larger Telecommunications Act of 1996, was a twenty-six-word sentence that would radically influence the development of the Internet: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” This sentence was part of Section 230 of the CDA, the remainder of which clarified exceptions for federal criminal law, intellectual property law, and electronic communications privacy laws.

Since its enactment in 1996, Section 230 has shaped how the issue of intermediary liability (IML) is managed in the United States. It has formed the bedrock upon which massive online platforms like Facebook, Twitter, and YouTube were built. By expressly shielding online service providers from being held liable for the content users publish on their platforms, Section 230 allowed them to grow exponentially during the first two decades of the 21st century and define the Internet Age.

In the years since Section 230’s passage, other jurisdictions adopted or began developing systems of intermediary liability which formed the basis for most online interactions and commerce. While the specific provisions of IML regimes may differ in their approach, they generally address common elements such as:

  • Protection for companies against being considered a speaker or publisher of user information
  • Requirements for companies to remove illegal content when becoming aware or notified
  • Requirements for companies to remove some harmful or objectionable content
  • Protections for companies who undertake measures to remove harmful or objectionable (but not illegal) content
  • Protections against being compelled to undertake monitoring

While policymakers in the US in 1996 may not have foreseen how consequential Section 230 would prove to be, its reverberations throughout societies worldwide have informed how many countries now approach IML protections, ranging from crafting brand-new regimes, to enacting comprehensive reforms, to considering scrapping such protections entirely. All three approaches, along with preserving the status quo, carry major implications for the future of online content and its moderation, as well as for the prevailing business models of major tech companies.

IML in the United States


Although intermediary liability is now inextricably linked to online content and platforms, the concept far predates the Internet. Before 1996, courts frequently turned to the standard set in the 1959 Supreme Court case Smith v. California, in which the Court overturned the conviction of a Los Angeles bookstore owner who had unwittingly sold a book containing erotica in violation of a city ordinance. The decision in this case, as clarified by later rulings, established that distributors are liable only if it can be reasonably established that they should have known about the illegal content.

This legal precedent was largely satisfactory for determining intermediary liability until the advent of the Internet; as increasing numbers of people came online, however, the traditional approach to IML was jeopardized by competing court decisions. In the 1995 case Stratton Oakmont, Inc. v. Prodigy Services Co., for example, a New York state trial court found that an online bulletin board was liable as a “publisher” for content on its site found to be fraudulent, and further held that the site’s rudimentary content moderation practices constituted “editorial control” over its content. The ruling contradicted the finding in an earlier case, Cubby, Inc. v. CompuServe, Inc. (1991), in which a court found that CompuServe, which ran an online message board, was nothing more than a conduit that could not reasonably have been assumed to know what its users were posting.

Viewed together, these two cases set the odd precedent that online service providers faced greater liability if they attempted content moderation than if they did not. Faced with this conundrum, which would certainly have adversely affected the nascent internet economy, members of Congress drafted Section 230.

Section 230 Provisions

As intimated earlier, Section 230 created a system of civil liability protection for online platforms by shielding them from being held liable as the speaker or publisher of information or content posted by users and preventing them from being held liable for “good faith” actions to remove or facilitate the removal of objectionable content. These protections apply to “interactive computer services”, defined as “any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server, including specifically a service or system that provides access to the internet and such systems operated or services offered by libraries or educational institutions”. It also defines an “access software provider” as a “provider of software (including client or server software) or enabling tools that do any one or more of the following: (A) filter, screen, allow, or disallow content; (B) pick, choose, analyze, or digest content; or (C) transmit, receive, display, forward, cache, search, subset, organize, reorganize, or translate content”.

In its current form, Section 230 further stipulates that none of its provisions may impair the enforcement of federal criminal law, intellectual property law, or state or federal sex trafficking law. The specific responsibilities of intermediaries regarding intellectual property law enforcement are defined in the Digital Millennium Copyright Act, which creates a system of notice and takedown for infringing content.

Legal Challenges to Section 230

Section 230’s first major challenge came in the 1997 Fourth Circuit case Zeran v. America Online, in which plaintiff Kenneth M. Zeran sued America Online for failing to remove, in a timely manner, defamatory ads related to the Oklahoma City bombing that users had posted on its platform. The court ruled in favor of AOL, reasoning that it would be unreasonable to expect the provider to screen the millions of posts put on its site every day. This ruling, and its accompanying justification, cemented Section 230’s function as a liability shield – particularly for large online platforms hosting content from millions (and eventually, billions) of users daily.

One particular area of contention since Section 230’s inception concerns the liability of online providers for sex trafficking and the sexual exploitation of children that takes place on their platforms. Section 230 gained notoriety in this regard after the Florida Supreme Court ruled 4-3 in the 2001 case Doe v. America Online that the liability shield protected AOL. The case involved a minor whose mother claimed that a man had recorded and photographed minors engaged in sexual activity in 1994 and posted the illicit content in AOL chatrooms. Relatedly, the First Circuit of the United States Court of Appeals generated controversy in 2016 after it dismissed a case brought by victims of sex trafficking against Backpage.com, a classified advertising website that maintained an “Adult Services” section. In the case, Jane Doe No. 1 v. Backpage.com, LLC, the plaintiffs argued that Section 230 did not apply because they were not seeking to hold Backpage liable as a publisher of the content, but rather because the design of its website and its lack of verification features made it reasonable to assume that sex trafficking could and would occur through the site.

Following that ruling, separate investigations by the Washington Post and the Senate Homeland Security and Governmental Affairs Committee found that Backpage had concealed evidence of criminality by systematically editing its ads for adult content and had aggressively marketed sex-related ads, despite the company’s previous insistence that it had no role in the content of the ads posted on its site. The ensuing public outrage prompted Congress to introduce the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA) and the Stop Enabling Sex Traffickers Act (SESTA) in 2017. The bills, which were signed into law on April 11, 2018, modified Section 230 to strip service providers of immunity from civil claims and criminal charges related to sex trafficking.

Court rulings have also limited the scope of Section 230, as in the 2008 case Fair Housing Council of San Fernando Valley v. Roommates.com. The Ninth Circuit concluded that, because Roommates.com actively created pre-populated question-and-answer choices that determined the substance of subscribers’ profiles, it “materially contributed” to the alleged unlawfulness and was not shielded from liability for violating anti-discrimination rules. More recently, the Ninth Circuit built on this precedent through its 2019 decision in HomeAway.com, Inc. v. City of Santa Monica, in which the court ruled that a Santa Monica city ordinance imposing liability on home-sharing services that failed to verify the legality of individual listings did not contravene Section 230, as it required only that platforms verify listings at the point of processing a transaction internally – not that they monitor third-party content in general.

These two cases are in the minority among litigation challenges to Section 230. By and large, court cases seeking to reduce the scope and/or authority of Section 230 have failed, prompting lawmakers – as in the aftermath of the Backpage decision – to craft amendments to the CDA with the intention of reforming its provisions or repealing them entirely.
