
Summarizing the Section 230 Debate: Pro-Content Moderation vs Anti-Censorship

Overview:

The rise of social media has indisputably defined the 21st century. As of 2021, 72% of Americans used social media as a place to learn, connect, share, and express themselves, whether through photo-sharing sites, blogging apps, discussion forums, or shopping platforms. It has advanced human progress in many areas of life. Despite this immense popularity, there is bipartisan consensus that social media also has negative impacts, including concerns over youth mental health, misinformation, censorship, and the incitement of violence.

In particular, there is some concern that social media platforms play an increasingly prominent role in the radicalization of extremists, specifically young adults, who may search for, consume, and spread harmful content with like-minded individuals online. On May 18, 2022, New York Attorney General Letitia James launched an investigation into Discord, Twitch, 4chan, and other platforms for their alleged role in the online radicalization of the young man who committed a mass shooting at a Buffalo, NY grocery store. He had been particularly active on social media leading up to the attack. Sadly, on May 24, 2022, another troubled young man killed 21 people at an elementary school in Uvalde, TX, after posting for weeks on Yubo that he was set on violence. As America continues to grieve these national tragedies, the aftermath has raised many ethical questions about digital accountability and reinvigorated the debate surrounding online content moderation, which is governed by Section 230 of the Communications Decency Act of 1996.

The overall debate surrounding Section 230 is embedded with issues of social media liability, free expression, and content moderation. Both Democrats and Republicans in Congress have grown skeptical of Section 230, although for different reasons involving pro-moderation versus anti-censorship arguments. Policymakers’ concerns have also been amplified by recent revelations from former Facebook employee Frances Haugen, who argued that the company’s algorithms are dangerous and that congressional action is needed. However, many, including the tech industry, argue that advanced algorithms are already employed to catch and restrict troublesome content, although they have limits. For example, Twitch removed the live-stream of the Buffalo attack in less than two minutes, yet millions later saw the viral footage, and social media companies struggled to stop its circulation. As the debate around online radicalization and social media intensifies, we take a look at Section 230.

Definitions:

Section 230 has two key provisions that govern the Internet:

1. Section 230(c)(1): “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

Known as “the 26 words that created the internet,” the first provision protects social media platforms from legal liability for the content users post on their platforms. The basic rule is that users are responsible for their own actions and speech on the Internet.

2. Section 230(c)(2): “No provider or user of an interactive computer service shall be held liable on account of…any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected…”

The second part, often referred to as the Good Samaritan exemption, allows platforms to police their sites for illegal content and other forms of online abuse without incurring liability for those moderation decisions; at the same time, they face no penalty if they choose not to moderate. The basic rule is that online platforms are encouraged to moderate harmful content in good faith.

Current Debate:

Pro-Moderation

In an interview with the New York Times, President Biden said that “Section 230 should be revoked, immediately” because it allows tech companies to propagate “falsehoods they know to be false.”

To what degree should social media platforms be responsible for hosting and promoting harmful content that could incite offline violence, as we saw in Buffalo and Uvalde? These shootings are not the first time social media platforms have come under scrutiny for their connection to violent attacks. In January 2021, many rioters who stormed the U.S. Capitol organized and broadcast radicalizing content before, during, and after the insurrection via social media. The rise of misinformation, disinformation, conspiracy theories, hate speech, and graphic violence online has led to increased calls for social media platforms to proactively moderate the dissemination of this problematic material and protect the public at large. Facebook whistleblower Frances Haugen testified to the Senate, saying, “My fear is that without action, divisive and extremist behaviors we see today are only the beginning.” She urged lawmakers to demand accountability and transparency to help society better understand what is happening on these platforms.

Anti-Censorship

Former President Trump tweeted, “REPEAL SECTION 230!!!” after issuing an Executive Order on Preventing Online Censorship.

Contrary to those who seek stronger content moderation policies, others argue that increased liability would be a slippery slope that could ultimately harm freedom of expression and innovation. The safe harbor protections given to online platforms have been the backbone of the Internet’s innovation and growth with minimal government regulation. Holding social media companies responsible for unlawful user behavior could disproportionately hurt startups and smaller platforms that lack sophisticated content moderation systems or cannot afford liability costs. To preserve communication and free expression, social media’s primary functions, many want to avoid increased content moderation requirements that could incentivize severe censorship. This is especially difficult in today’s political environment, when each side seems intent on its own “facts.” Claims that content moderation infringes First Amendment rights are central to Elon Musk’s vow to restore free speech protections to Twitter if he buys the company and takes it private.

Social Media and “Good Faith” Moderation

In order to maintain their reputations and build societal trust, most mainstream social networks have deployed proactive community guidelines, developed sophisticated algorithms, and substantially reduced the amount of harmful content on their sites. The sheer volume of content on these platforms requires a hybrid approach to content moderation: a combination of AI systems and human content moderators, as sketched below. Take the vast scale of YouTube, for example, where roughly 500 hours of content are uploaded every minute; it is nearly impossible to filter and remove every harmful post in real time. In response to the rise of online hate speech and violence, social media companies often outsource content moderation to third-party contractors. Such private partnerships allow greater flexibility and thus may be preferable to government regulation. Nonetheless, concerns over AI bias, data privacy, and the variances in human language have increased bipartisan calls for algorithmic transparency.
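To make the hybrid approach concrete, the sketch below is a purely hypothetical illustration, not any platform’s actual system: an automated classifier removes posts it is highly confident violate policy, routes borderline posts to human reviewers, and approves the rest. The classifier, thresholds, and flagged terms are invented for demonstration.

# Illustrative sketch of a hybrid (AI + human) moderation pipeline.
# The classifier, thresholds, and flagged terms below are hypothetical
# placeholders, not any platform's real system.
from dataclasses import dataclass

@dataclass
class Decision:
    action: str   # "remove", "human_review", or "approve"
    score: float  # estimated probability that the post violates policy

def classify_harm(text: str) -> float:
    """Stand-in for a trained model that scores content from 0 (benign) to 1 (harmful)."""
    flagged_terms = {"violence", "attack"}  # toy heuristic, for demonstration only
    hits = sum(term in text.lower() for term in flagged_terms)
    return min(1.0, 0.4 * hits)

def moderate(text: str, remove_at: float = 0.9, review_at: float = 0.5) -> Decision:
    """Auto-remove high-confidence violations; route uncertain cases to human reviewers."""
    score = classify_harm(text)
    if score >= remove_at:
        return Decision("remove", score)
    if score >= review_at:
        return Decision("human_review", score)
    return Decision("approve", score)

if __name__ == "__main__":
    for post in ["Sharing photos from my garden", "a post threatening violence and an attack"]:
        print(f"{post!r} -> {moderate(post)}")

In practice, platforms pair far more sophisticated machine-learning models with large teams of human reviewers, which is why moderation at the scale described above remains imperfect.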

Current Discussion on Capitol Hill

The current political climate toward Section 230 is divided along party lines. Liberal lawmakers argue that Section 230(c)(1) encourages the spread of harmful content while Big Tech deflects accountability. On the other hand, conservative lawmakers claim that Section 230(c)(2) allows Big Tech to unfairly censor conservative opinions, which they argue violates free speech. Big Tech has responded by spending millions of dollars on congressional lobbying efforts to defend Section 230. There are currently over 25 pending bills aimed at repealing or reforming Section 230 in the 117th United States Congress. Whether by proposing case-specific carve-outs, regulating AI algorithms, or defining what constitutes “bad faith” moderation, Congress is cracking down on Section 230. As lawmakers race to pass landmark reforms targeting Big Tech before the 2022 midterm elections, the tech industry is concerned that increased regulation could inadvertently stifle innovation.

High-Profile Lawsuits Challenging Section 230:

*NetChoice and the Computer & Communications Industry Association (CCIA) are both lobbying associations representing the technology industry.

1. Dobbs v. Jackson Women’s Health Organization: on June 24, 2022, a landmark decision by the U.S. Supreme Court held that the Constitution of the United States does not protect abortion and overruled Roe v. Wade. Many pro-choice activists are concerned about increased surveillance and data collection on digital platforms such as period-tracking apps. Will social networks be held accountable for harboring people who seek abortions, or will they aid abortion prosecutions by releasing user data to police? Right now, platforms are protected by Section 230, but current federal proposals could have unintended side effects in the wake of new abortion laws.

2. NetChoice, LLC v. Paxton: Texas state law HB20 bans viewpoint-based censorship on large social media platforms. The plaintiffs, NetChoice and CCIA, argue that HB20 is unconstitutional because it forces private companies to disseminate objectionable content even when it violates the platforms’ standards. After a lengthy back-and-forth between NetChoice and the courts, the Supreme Court blocked the Texas law in a 5-4 vote on May 31, 2022.

3. NetChoice, LLC v. Attorney General, State of Florida: Florida state law SB7072 prohibits social media platforms from removing political candidates and blocking users’ content. The plaintiffs, NetChoice and CCIA, argue that SB7072 overrides the editorial judgment of social media providers with overreaching governmental regulation. On May 23, 2022, a federal appeals court released an opinion ruling that key provisions of the law are unconstitutional, a decision that aligns with the Supreme Court’s order blocking the parallel Texas law.

Conclusion

Despite social media companies’ initiatives to update their policies and voluntarily police their sites, these platforms continue to grapple with how to identify, mitigate, and address the spread of harmful content. How can society combat the spread of harmful content while enabling free speech? Policymakers, advocates, academics, and industry experts attempt to address this question in a way that protects the American people without infringing the constitutionally protected speech of either providers or users.
