Gonzalez v. Google: Implications for the Internet’s Future

On October 3, 2022, the Supreme Court announced that it would hear two cases that could fundamentally change the future of the modern internet. Gonzalez v. Google and Twitter, Inc. v. Taamneh involve both the Anti-Terrorism Act and Section 230 of the Communications Decency Act, which shields tech platforms from lawsuits over hosting and moderating user content. Section 230 is one of the most important laws in tech policy, and this will be the first time the Supreme Court has interpreted its scope. Back in 1996, at the dawn of the internet age, Section 230 was created to encourage the development of the internet while fostering a safe online environment where users could connect and civilly express themselves. More than 25 years after its enactment, there is concern that social media platforms play a role in the radicalization of extremists, which can lead to offline violence. The legal question presented here is whether Section 230 protects online services from lawsuits based on recommendations made by their algorithms. As Section 230 makes its way to the Supreme Court, what’s at stake, and should Congress step in first?

Understanding Section 230

Section 230 has two key provisions that govern the internet:

1. Section 230(c)(1): “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

Known as “the 26 words that created the internet,” this provision means that online platforms cannot be held liable for the words and actions of their users. These legal protections were created to protect internet innovation by preventing an influx of lawsuits over user-generated harm. Without Section 230, social media companies could be sued for every message and post made on their services.

2. Section 230(c)(2): “No provider or user of an interactive computer service shall be held liable on account of…any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected…”

Known as the Good Samaritan provision, this clause was enacted to give platforms broad immunity when they choose to moderate content in good faith. Lawmakers wanted to avoid a lawless no-man’s-land on the internet and to ensure that platforms could keep their services safe without being penalized for trying. Platforms are encouraged to voluntarily block and screen objectionable content, but they are not required to do so and retain immunity even if they do not.

Understanding Algorithmic Content Moderation

Algorithmic content technologies play an increasingly prominent role in everyday online life. Under Section 230’s protections, online services have built and deployed powerful content moderation systems that combine automated and human moderators. Online platforms use automated filtering to process user-generated data, retain user attention, and detect online abuse. More controversially, algorithms also recommend content to users based on their preferences and search history. For example, a user who watches a cat video on YouTube may see similar content (e.g., dog or pet videos) recommended to them in the future. A person’s social media newsfeed is the result of algorithmic recommendations.
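To make the mechanics concrete, the sketch below shows one simple way a preference-based recommender can work: it builds an interest profile from the tags of videos a user has watched and ranks unwatched videos by tag overlap. Everything here, from the catalog to the function names, is an invented toy example, not any platform’s actual system.

```python
from collections import Counter

# Hypothetical catalog mapping each video to descriptive tags.
# Illustrative only; real recommenders are far more complex and proprietary.
CATALOG = {
    "cat_compilation": {"cats", "pets", "funny"},
    "dog_tricks": {"dogs", "pets", "training"},
    "kitten_rescue": {"cats", "pets", "rescue"},
    "news_daily": {"news", "politics"},
    "cooking_basics": {"cooking", "food"},
}

def recommend(watch_history, catalog, top_n=3):
    """Rank unwatched videos by tag overlap with the user's history."""
    # Build an interest profile: how often each tag appears in the
    # videos the user has already watched.
    profile = Counter()
    for video in watch_history:
        profile.update(catalog[video])

    # Score each unwatched video by summing the profile weights of its tags.
    scores = {
        video: sum(profile[tag] for tag in tags)
        for video, tags in catalog.items()
        if video not in watch_history
    }
    # Return the highest-scoring candidates first.
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# A user who watched one cat video is steered toward other pet content:
print(recommend(["cat_compilation"], CATALOG))
# ['kitten_rescue', 'dog_tricks', 'news_daily']
```

The legal debate turns on exactly this kind of loop: the platform supplies none of the underlying videos, yet its own code decides which items each user sees next.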

One question before the Supreme Court in Gonzalez v. Google is whether, under Section 230, a website or service “develops” content—and therefore loses immunity—when it uses an algorithm to recommend terrorist content based on a user’s viewing history. As the Artificial Intelligence Law and Policy Institute argued in an amicus brief before the Ninth Circuit, the Court must determine whether content recommendation algorithms are neutral facilitation tools or a means of developing user-tailored content. If the Court determines that recommendation algorithms fall outside the scope of Section 230, the ruling would have enormous implications for how websites operate.

A final issue raised by the lower court in Gonzalez is whether Congress should impose additional requirements on online services given their increased ability to moderate harmful content. Many websites that leverage new technologies have, for example, substantially reduced harmful content, although there are limitations. As the lower court writes, “Congress may well decide that more regulation is needed,” given platforms’ improved capabilities to proactively screen for dangerous content. This could incentivize online businesses to invest in the research and development of more advanced content moderation technologies.

Gonzalez v. Google

In early October 2022, the Supreme Court agreed to take up a case questioning online services’ liability for amplifying terrorist organization content that allegedly radicalized ISIS sympathizers into carrying out the coordinated 2015 terrorist attacks around Paris, which killed Nohemi Gonzalez and many others. The Supreme Court will hear challenges to companies’ immunity under Section 230 of the Communications Decency Act of 1996, which provides immunity against certain claims brought against online services. Importantly, this also marks the first time the Supreme Court has taken up Section 230 on its docket. The Court’s ruling in Gonzalez v. Google will almost certainly have major implications for the future of the internet.

The plaintiffs in this case argue that the defendant 1) violated the Anti-Terrorism Act by communicating ISIS messaging, radicalizing recruits, and furthering the organization’s mission, and 2) should be held liable for this content because it was promoted through targeted recommendations on its website. The plaintiffs allege that this promotion constituted Google’s aiding and abetting of these terrorist organizations’ promotion and recruitment efforts. On this basis, the plaintiffs argue that YouTube provided “material support” to ISIS: that Google knowingly permitted ISIS to post hundreds of videos on YouTube, and that the services provided by its algorithm, which gave viewers access to other similar content, “were critical to the growth and activity of ISIS.”

The defendant argues that 1) Section 230 bars liability for third-party content published on its websites, and 2) Section 230 does not exempt recommended content from its liability protections, as established by Ninth Circuit and Second Circuit precedent. Google’s brief in opposition to the petition for Supreme Court review lays out these defenses. It also contends that its YouTube technology did not violate the ATA or provide substantial assistance to the terrorist organization.

The legal question at the heart of this matter is the following:

“Does section 230(c)(1) immunize interactive computer services when they make targeted recommendations of information provided by another information content provider, or only limit the liability of interactive computer services when they engage in traditional editorial functions (such as deciding whether to display or withdraw) with regard to such information?”

If the Supreme Court rules in favor of the plaintiffs, online platforms may have to change the way they operate to avoid being held liable for the content promoted on their sites. It could lead to a more censored internet, where content is published only when there is no risk of repercussions. As the petition acknowledges, the implications of addressing Section 230 as it applies to algorithm-generated recommendations are significant, because such recommendations are employed in almost every instance of internet usage; consequently, the effects of this ruling could alter technology companies’ business models and the internet as we know it.

Twitter, Inc. v. Taamneh

Alongside Gonzalez, the Supreme Court also took up a separate but related lawsuit, Twitter, Inc. v. Taamneh, which questions online services’ accountability for content that led to the death of Nawras Alassaf, a Jordanian citizen, in an ISIS-affiliated attack in Istanbul in 2017. The family sued Twitter, Google, and Facebook, alleging that the companies failed to control terrorist content proliferating on their sites. They argue that the platforms provided infrastructure through which ISIS could promote posts that supported its operations, and that the businesses then benefited by deriving revenue from these targeted ads.

The Supreme Court will answer 1) whether an online service that regularly detects and deters terrorists from using its services nonetheless “knowingly” aided terrorism by not taking greater steps to prevent such use, and 2) whether the service can be held liable for aiding and abetting terrorism under the Anti-Terrorism Act, as amended by the Justice Against Sponsors of Terrorism Act (JASTA). The Chamber of Commerce filed an amicus brief at the circuit court supporting Twitter’s argument that the decision below expands the scope of the Anti-Terrorism Act beyond Congress’ intent for the law. It claims that businesses should not be held liable for unidentified actors using their products to further terrorism; “otherwise, businesses would be burdened with the insurmountable task of actively policing their entire customer base, which in the case of Defendants, numbers in the billions.”

The case has implications for content moderation and for whether companies could be liable for violent, criminal, or defamatory activity promoted on their websites. If the Court rules in the plaintiffs’ favor, then even a platform that works to keep most terrorist content off its site could face lawsuits whenever such content slips through. Platforms would need to implement stricter content moderation policies and restrictions on content publishing, or, perversely, they might apply no content moderation at all to avoid the knowledge that triggers liability.

Regardless of how Gonzalez and Twitter, Inc. are decided, these cases are sure to shape the social media regulatory landscape in a multitude of ways – from the way these companies operate to the discourse around social media regulation and free speech.

Background

Supreme Court

The Supreme Court has never interpreted the limitations and exceptions of the law; however, Justice Clarence Thomas has previously voiced skepticism toward Section 230. Justice Thomas expressed dissatisfaction with the Supreme Court’s decision this year not to review Jane Doe v. Facebook, Inc., stating, “Assuming Congress does not step in to clarify 230’s scope, we should do so in an appropriate case.”

There are lower court cases that are immediately relevant and important to the interpretation of the law. In Force v. Facebook (2019), a lawsuit very similar to Gonzalez v. Google, U.S. citizens victimized by terrorist attacks in Israel alleged that Hamas posted content on Facebook that actively encouraged the attacks. The court ruled that Section 230 immunized Facebook from liability as a publisher making editorial decisions. The Supreme Court declined to hear the case.

Whether immunity applies when a recommendation algorithm is a “neutral” tool that does not directly add to content has precedent in recent court cases. In Dyroff v. Ultimate Software Group, Inc., an online service encouraged users to share first-hand experiences via an open-ended text box and used algorithms to recommend groups based on user posts. Because the service used “neutral” tools to facilitate communications, it was immune when a user posted about opportunities to buy drugs and later overdosed on fentanyl-laced heroin. In contrast, those who oppose platform immunity, including the dissent in Gonzalez, argue that a website or online service “affirmatively amplifies” content when its recommendation algorithm directs terrorist content to users it knows are susceptible to acting upon it. In FHC v. Roommates.com, a roommate-matching service was not immune from discrimination claims because, rather than recommending matches based on users’ inputs into an open-ended text box, the website prompted users to answer questions about their sex, sexual orientation, and the number of children staying with the applicant.
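To illustrate the distinction the courts drew, here is a hypothetical Python sketch contrasting the two designs; the function names and data are invented for this example and are not either service’s actual code. A “neutral” tool reacts only to what users freely type, while structured prompts inject the site’s own categories into the result.

```python
# Hypothetical contrast between a Dyroff-style "neutral tool" and a
# Roommates.com-style structured prompt. Illustrative only.

def neutral_recommendation(user_posts, groups):
    """Suggest groups by matching words from a user's own free-form
    posts; the service contributes no content of its own."""
    words = {w.lower() for post in user_posts for w in post.split()}
    return [g for g in groups if words & set(g.lower().split())]

def structured_match(profile, listings):
    """Filter matches on answers to questions the site itself required,
    the design the Ninth Circuit treated as 'developing' content."""
    return [
        listing for listing in listings
        if listing["preferred_sex"] in ("any", profile["sex"])
    ]

# The neutral recommender is driven entirely by the user's free text...
print(neutral_recommendation(["looking for hiking buddies"],
                             ["hiking club", "book club"]))
# ['hiking club']

# ...while the structured matcher acts on categories the site demanded.
print(structured_match({"sex": "female"},
                       [{"id": 1, "preferred_sex": "any"},
                        {"id": 2, "preferred_sex": "male"}]))
# [{'id': 1, 'preferred_sex': 'any'}]
```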

Congressional Landscape

At the federal level, the Section 230 debate has been a major partisan issue. Both Democrats and Republicans in Congress have grown skeptical of Section 230, although for different reasons, rooted in pro-moderation versus anti-censorship arguments. Democratic lawmakers argue that Section 230 encourages the spread of harmful content while technology companies deflect accountability. Conversely, Republican lawmakers say that Section 230 allows these companies to suppress free speech by unfairly censoring conservative viewpoints, such as by de-platforming a sitting Republican president.

Nonetheless, for all the talk about repealing or amending Section 230, legislation affecting social media platforms has proved very difficult to pass. More than 20 bills aimed at repealing or reforming Section 230 were introduced in the 117th United States Congress, but none came close to passing. A bipartisan consensus around Section 230 is unlikely anytime soon, but the gridlock may be shaken as Gonzalez v. Google draws national attention and influences federal policymaking. The Supreme Court’s ruling could fundamentally change the scope of Section 230, which would put pressure on Congress to act. Effective legislative proposals will require an accurate understanding of how these platforms work, and there are concerns that legislators lack the technical expertise needed to navigate the issue.

What’s at Stake?

At the heart of both cases is how online services moderate their content. While it is difficult to predict the outcome of these cases, the implications of changing Section 230, even through a narrowly tailored opinion, are significant. The rulings could open new doors for future litigation, prompting an influx of legislation by state and federal legislators and fundamentally changing the future of the internet.

The Supreme Court’s decisions in these cases will be pivotal for Section 230’s liability protections for online platforms. On one extreme, the Court could adopt a broad view of Section 230, strengthening its protections. This would likely create a more passive internet at first, where content goes largely unregulated and unmoderated; platforms would have a greater incentive not to restrict content, since doing so could amount to non-neutral censorship of speech and potentially implicate First Amendment concerns. Consequently, hateful or dangerous content could surface more easily and disrupt the safeguards companies have designed for their users. Alternatively, the Court may narrow the protections under Section 230. This would likely incentivize greater content moderation, potentially by teams of lawyers, to avoid legal liability. Content review in this capacity would be financially burdensome, especially for small businesses: ensuring that every piece of content published on a site complies with the law would be costly. It could also lead to a more closed internet, where user content is excessively removed and social channels are limited in scope. For example, in 2021, CNN shut down its Facebook page after an Australian court ruled that news publishers could be held liable for comments on article links.

Just as the drafters of Section 230 did not contemplate sophisticated recommendation algorithms, reformers must consider repercussions not only for social media platforms but also for future online platforms. For example, open-source repository platforms give users space to share or adopt the open-source code on which the internet is built. These platforms are widely considered beneficial to innovation but are at risk of facing greater liability. If Section 230 is interpreted to apply to open-source platforms, they will likely take a more stringent moderation approach and restrict collaboration to maintain their immunity. On the other hand, broader interpretations of Section 230 could disrupt efforts to develop trust and safety infrastructure, such as current acceptable-use and moderation practices. Self-regulation of “web 3.0” may also be interpreted differently after the Court’s rulings; questions remain about how liability would attach in decentralized web 3.0 spaces or in what may become the metaverse.

The Future of Section 230

While Gonzalez v. Google is the first Section 230 case the Supreme Court will hear, more are likely coming. Many U.S. states are responding to failed federal legislation by taking the Section 230 issue into their own hands. Recently enacted high-profile laws in Texas and Florida broadly ban “viewpoint” censorship and the deplatforming of political candidates. Tech industry lobbying groups NetChoice and the Computer and Communications Industry Association challenged both laws, arguing that they are unconstitutional because they violate the First Amendment rights of private companies to exercise editorial judgment. After lengthy back-and-forth battles between the tech industry and the courts, both laws are currently blocked while petitions to the Supreme Court are pending. A few months ago, the Fifth Circuit Court of Appeals upheld the Texas law (HB 20) as constitutional, conflicting with a decision by the Eleventh Circuit Court of Appeals, which ruled Florida’s law (SB 7072) unconstitutional. This “circuit split” among the appeals courts makes it more likely that the Supreme Court will intervene and weigh in.

Conclusion

Now that Section 230 is finally making its Supreme Court debut, tech policy faces a pivotal moment. With most Americans online and enjoying the many benefits of social media and the open internet, the courts and Congress should carefully review the entire legal landscape as they seek to address the harms. BPC will continue to monitor these proceedings and developments in Congress as they unfold in 2022-2023.
