
Implications for Changing Section 230

Section 230 has proven to be a double-edged sword. On one hand, the liability protections afforded to online intermediaries have played a crucial role in the digitization of the global economy by connecting billions of people online; on the other, the unfettered growth of online platforms spurred by IML regimes has contributed to a slew of societal woes, from mass disinformation campaigns to mental health crises among teenagers. As demonstrated by recent revelations from whistleblowers like former Facebook employee Frances Haugen, the vast protections Section 230 affords giant online platforms like Facebook have allowed them to operate with little oversight, transparency, or accountability. Democrats and Republicans in Congress have grown skeptical of Section 230: the former complain it allows Silicon Valley to deflect responsibility for deceptive practices on its sites, while the latter accuse online platforms of abusing their legal protection to censor conservative views. Yet because Section 230 is so foundational to the entire internet economy, from startups to Big Tech firms, simply revoking those protections could cause more harm than good. As such, policymakers, advocates, and academics have approached the question of Section 230 reform from several angles. 

Potential Consequences of Repealing Section 230 

Whether or not Section 230’s blanket protection was the correct approach to addressing intermediary liability online, much of the modern internet was built on its back. As such, an outright repeal of Section 230 would likely be chaotic. Without the liability shield, one of the first major consequences of repeal would almost certainly be an onslaught of litigation against online platforms. Section 230 is hardly the only legal protection these platforms have against such lawsuits; as mentioned earlier in this series, court cases such as Cubby v. CompuServe sided with online service providers before the IML regime was even established. Future intermediary liability cases decided without Section 230 would also likely invoke First Amendment arguments, potentially paving the way for new precedents set by the Supreme Court.  

The long, drawn-out cases arising from Section 230’s repeal could also disproportionately affect small and medium-sized online platforms. Today, cases involving online intermediary liability can often be dismissed early by invoking Section 230’s protections; without it, court battles and their associated expenses could prove financially disastrous for small- and medium-sized enterprises (SMEs), regardless of the merits of the lawsuits.  

Repealing Section 230 would also lead to a far more cautious internet and to rushed attempts to restrict discourse online. Expect comment sections on many sites to close for fear of what users might post, as well as frequent outages on services people use daily, especially social media, while companies search for ways to minimize their risk. Content moderators on services like video-sharing sites might grow overzealous in deciding what to leave up and what to take down; if a video depicting police brutality or rioting is uploaded without a content warning, for example, a moderator might remove the content altogether.   

Without a federal statute mandating a uniform policy across all 50 states, repealing Section 230 would also fragment intermediary liability protections at the state level. Ahead of the curve, the North Dakota legislature attempted to sidestep Section 230 with a bill submitted in January 2021, titled “An Act to permit civil actions against social media sites for censoring speech.” The bill would have made social media websites with over 1 million users “liable in a civil action for damages to the person whose speech is restricted, censored, or suppressed, and to any person who reasonably otherwise would have received the writing, speech, or publication.” While the bill would have been immediately preempted by Section 230 had it passed, it provides insight into the kinds of challenges intermediaries could face without federal protections. North Dakota’s liability threshold would have been 1 million users, for example, but if Texas passed a law banning platforms with over 50 million users from censoring content, as it attempted to do this past September, online service providers would face a complex web of state-level requirements that would be untenable for sites like Twitter as currently constituted.  

While the courts may eventually restore certain protections to online intermediaries through the aforementioned avenues, such as the First Amendment, repealing Section 230 would send shockwaves through platforms that depend on user-published content, like Twitter, YouTube, and TikTok. These services, and the companies that own them, constitute a multi-billion-dollar industry, and global markets would undoubtedly feel the effects if their core business model were suddenly upended.  

Reforming Section 230 

Cognizant of the ramifications of outright repeal, but also aware of the many harms that have arisen under the blanket liability protections afforded to Big Tech companies, many lawmakers have sought a middle ground on Section 230 by proposing a wide variety of amendments to the current IML regime. Many of these proposed reforms fall into similar categories, some of which are already employed in other countries’ IML regimes. 

Sex Trafficking and Sexual Exploitation of Children – As discussed previously, public outcry over Section 230’s protection of online platforms used to facilitate child exploitation (Doe v. America Online) and sex trafficking (Backpage.com, LLC v. McKenna) spurred passage of the FOSTA-SESTA package in 2018, the first significant amendment to Section 230’s liability protections. Lawmakers have continued pursuing amendments in this area, most notably the 2020 Eliminating Abusive and Rampant Neglect of Interactive Technologies (EARN IT) Act, which would add a new carve-out under Section 230(e) for criminal and civil actions, state and federal, relating to child sexual exploitation law.  

Political Neutrality – Congressional Republicans have been particularly focused on barring online platforms from censoring users’ content, as they perceive such moderation as unfairly targeting conservative viewpoints. In 2020, Senators Wicker (R-MS), Graham (R-SC), and Blackburn (R-TN) introduced the Online Freedom and Viewpoint Diversity Act, which would replace the broad “otherwise objectionable” standard in Section 230(c)(2) with specific categories of content, including “self-harm,” “terrorism,” and “unlawful” content. Senator Wicker has also put forward the Promoting Rights and Online Speech Protections to Ensure Every Consumer is Heard (PRO-SPEECH) Act of 2021, which would prevent large internet platforms from impeding access to lawful content and add “political affiliation” as a protected class for online speech alongside race, sexual orientation, religion, and ethnicity.  

Tiered Regulation – One theme of Section 230 reform that has gained traction is the idea of applying additional requirements only to the largest online platforms while continuing to shield SMEs. The concept has been particularly popular in the EU, where politicians are exploring regulations specifically aimed at very large online platforms (VLOPs) and very large online search engines (VLOSEs). Bills like the bipartisan Platform Accountability and Consumer Transparency (PACT) Act contain language exempting online providers whose monthly active user counts fall below a certain threshold from their reforms.  

Artificial Intelligence and Discrimination – A major area of concern for lawmakers has been the development and deployment of AI and machine learning by large online platforms to organize, recommend, and remove content without human supervision. To streamline and tailor their services to users, many of these platforms deploy AI-enabled tools and algorithms for practices such as targeted advertising, automated content moderation, and engagement ranking. These practices require huge amounts of user data to operate effectively, prompting privacy concerns. In many cases, they have also led to discrimination against vulnerable communities – intentionally or not – as the content presented to one group may be radically different from that offered to another. This has proven to be one of the more difficult areas of intermediary liability reform, as it delves into a number of other conversations around ethics, privacy, bias, and frontier technologies. Members of Congress have proposed multiple bills attempting to rectify these issues this year, including the proposed Civil Rights Modernization Act and the Justice Against Malicious Algorithms Act (JAMAA). In JAMAA’s case, however, the text of the bill is emblematic of the problems faced when trying to hold platforms accountable for the algorithms they deploy. The bill withdraws immunity from providers of an interactive computer service who “should have known such provider was making a personalized recommendation of third-party information.” While personalized recommendations can be used in discriminatory ways, they are also foundational tools that let users find and view content relevant to them without having to sift through scores of otherwise unrelated posts that would clutter their feeds.  

Reinterpreting Section 230 

As mentioned earlier, many cases beginning with Zeran v. America Online have been dismissed by lower courts, and as a result, Section 230 has never been reviewed by the US Supreme Court. Despite the relative status quo that has held over the last 25 years with regard to Section 230 litigation, however, legal jurisprudence is by no means a monolith. In October 2020, for example, shortly after the Supreme Court declined to review the scope of Section 230 by way of the Ninth Circuit case Malwarebytes, Inc. v. Enigma Software Group USA, LLC, Justice Clarence Thomas issued a statement suggesting that, in an appropriate case, the scope of the liability shield should be reexamined. Although the other Justices’ views on this issue are far from clear, the Court could in theory reinterpret Section 230 as written. It could, as Justice Thomas suggested, draw a distinction between “publisher” liability and “distributor” liability – a distinction eliminated early on by Zeran. Doing so would return intermediary liability closer to its traditional interpretation under Smith v. California, under which intermediaries distributing third-party content they know to be illegal can be held responsible. If the Supreme Court issued a ruling narrowing the scope of Section 230’s protections, it could redefine how future decisions are made at every level of the judicial branch, though not nearly as dramatically as outright repeal would.  

