Summarizing the Amicus Briefs Arguments in Gonzalez v. Google LLC

Today, February 21, the Supreme Court hears oral arguments in Gonzalez v. Google, a case that could fundamentally change the future of the modern internet. Gonzalez v. Google involves Section 230 of the Communications Decency Act, which has thus far shielded tech platforms from liability for hosting and moderating user content. Section 230 is one of the most important laws in tech policy, and this will be the first time the Supreme Court has interpreted its scope. The court's ruling in this case could significantly reshape online platforms' legal protections and their ability to shield users from harmful content.

As the Supreme Court hears oral arguments in Gonzalez v. Google, organizations from across the political spectrum have attempted to influence the court's decision by filing amicus curiae briefs, including one from BPC. Overall, 78 organizations submitted briefs. This research summarizes the main arguments in support of the petitioners (Gonzalez), the respondent (Google LLC), or neither party. We also share which experts were cited most in support of each side, and what this reveals about the arguments in the case.

In support of Gonzalez

  1. Broad immunity concerns: Many briefs in favor of Gonzalez argue that tech companies should not be immune from liability in every circumstance. For example, some argue that platforms should be held liable when they make editorial judgments to ban opposing political parties’ messages. Many of these briefs explain that tech companies are no longer acting “voluntarily” or “in good faith” to restrict objectionable content as Section 230 requires, and, thus, they are no longer entitled to its liability shield.
  2. Meaningful redress rights: Several of these briefs claim the court’s ruling in favor of Google would undermine the rights of Americans to seek appropriate redress for illicit or harmful acts committed online. They contend that Section 230 prevents citizens from adequately holding companies accountable for injuries caused by information circulated online and from obtaining compensation for these wrongs. Many of these arguments favor narrower interpretations of publisher immunity to grant consumers more opportunities to seek legal recourse.
  3. Child safety: The arguments in support of Gonzalez also prioritize interpretations of Section 230 that are consistent with one of Congress’ original intents to protect children from harmful content online. These briefs support giving child victims and parents an avenue to hold technology companies accountable for harms incurred online.
  4. Algorithm concerns: Additionally, these briefs argue that Section 230 should not immunize internet service providers when they create and profit from recommendation systems that entice users to engage with unlawful, violent, and harmful content. They also argue that the lower court’s expansive interpretation of publisher immunity under Section 230 (i.e., that immunity encompasses online recommendation algorithms) makes it more difficult to hold social media companies accountable for the harm their products inflict on people. Furthermore, the briefs claim the scope of immunity granted for “traditional publications” should not extend to content that is recommended to specific users.

In support of Google

  1. Narrow immunity concerns: Several briefs in support of Google argue that platform choices about how to display, organize, and promote content are functionally inseparable as part of a platform’s unified design. Section 230 protects platforms from liability when they use tools to facilitate online exchanges and block dangerous content. Several briefs argued that Section 230’s authors intended to protect recommendation systems, the backbone of critical services that have become an essential part of daily life, from how we search the internet to how we apply for jobs.
  2. Innovation and competition: Many briefs stated that Congress intended Section 230 to promote competition and innovation in the technology sector, and some amicus briefs explained how the digital economy has flourished as a result. Under a limited Section 230, only large platforms could absorb the risks of content moderation. The briefs argue the Supreme Court should honor Congress’ deregulatory intent that encourages self-regulation.
  3. Congress’ authority to rectify internet policy: Some organizations defended the viewpoint that limiting Section 230 represents a profound policy change that should be left for Congress to determine, not the judiciary. Congress is the better avenue to consider policy changes about how to promote free expression, competition, and user control while reducing online harms.
  4. Free speech and expression: Several briefs also suggested that increasing platform liability for hosting user speech would lead to severe censorship, where platforms remove content that is even mildly controversial. Faced with the dilemma of leaving risk-creating content up or taking it down, platforms will take down large amounts of diverse, valuable speech from the internet.

In support of Neither

  1. Reducing hateful and extremist content: Many briefs argued that as global internet users increase annually, bad actors misuse and exploit the internet more than ever before. Specifically, there are concerns that social media platforms play an increasingly prominent role in the radicalization of extremists, specifically young adults, who may search, consume, and spread harmful content with like-minded individuals online and then incite violent actions offline. Likewise, it is vital that children be protected from cyberbullying, inappropriate content, and invasions of privacy. Therefore, content moderation is social media’s most powerful weapon against the rise of extremism, hate speech, and graphic violence online.
  2. Section 230 has changed from its original meaning and purpose: Several briefs suggest that, back in 1996, Congress enacted Section 230 so platforms could host and moderate user-generated content, but some organizations argue that the interpretations of its text and original legislative intent have changed over time through various court rulings. Now internet service providers have broad immunity from a wide range of claims. Furthermore, there were no algorithms, artificial intelligence, machine learning, or advanced content moderation technologies in 1996; Congress could not have envisioned the internet as we know it today, nor the various positive and negative consequences it would have on our society.
  3. AI-driven recommendation algorithms: In today’s technologically advanced world, algorithmic content technologies play an increasingly prominent role in our everyday lives. Under Section 230 protections, online services have built and deployed sophisticated content moderation systems that filter user-generated data, retain user attention, and detect online abuse. More controversially, algorithms also recommend content to users based on their preferences and search history. Many briefs insist that understanding the mechanics of these content moderation systems is key to understanding this case and the Section 230 debate.

The Most Influential Experts in Gonzalez v. Google LLC

As the Supreme Court reviews these amicus brief filings, it will come across citations to nearly 900 different experts and 600 different cases. While each organization that submitted an amicus brief has varying interpretations and considerations, we can also identify similarities between arguments based on the expertise and resources they highlighted.

A few trends emerge when comparing amicus briefs that supported Google with those that either supported Gonzalez or neither party. For example, briefs supporting Google rely heavily on law professors such as Daphne Keller, Eric Goldman, Jack M. Balkin, and Anupam Chander. These briefs tend to advocate for preserving a competitive, free online market of ideas and connections without fear of liability.

Briefs supporting Gonzalez or neither party also rely on law professors, such as Adam Candeub and Danielle Keats Citron, who emphasize freedom of expression and minimizing harm in cyberspace, respectively. However, these briefs rely much more on recent investigative journalism that has uncovered how certain algorithms and technologies can distort reality and amplify extremist content. The top cited experts include journalists from The New York Times (Gabriel J.X. Dance, Sheera Frenkel, Michael H. Keller), The Guardian (Paul Lewis), The Wall Street Journal (Jeff Horwitz, Deepa Seetharaman, Georgia Wells), The Washington Post (Craig Timberg), and The Atlantic (Anne Applebaum).


When consuming opinions on Gonzalez v. Google, pay careful attention to the experts an author cites and the arguments they gravitate toward. Different amicus briefs defend different positions with different reasoning: some draw on the role of platforms' algorithms in spreading divisive and hateful content, while others defend Section 230's benefits, from decreased litigation costs for smaller websites to the many features it enables. Taken together, the 78 amicus curiae briefs showcase the broad impact the Supreme Court's decision will have on issues related to free expression, child safety, national security, online innovation, and more. BPC will continue to monitor these proceedings as they unfold in the coming months.
