Internationally, content moderation issues are governed by what are termed intermediary liability (IML) regimes. Several other countries maintain IML regimes that vary in scope and severity in how they define the relationship between online platforms and the users who publish content on their sites. Freedom House reports that at least 48 countries have passed or are considering laws regulating tech companies' handling of content, data, and competition; with few exceptions, many of these laws are veiled attempts to suppress speech. Overall, these IML regimes tend not to treat the intermediary as a speaker or publisher, instead building frameworks that require intermediaries to remove illegal content when notified. They also reflect a trend of governments increasingly pressing intermediaries to block undesirable online content, whether hate speech, privacy violations, terrorism, misinformation, or, in some cases, government criticism. It is less common for countries to explicitly address intermediaries' monitoring, filtering, or removal of content that is not illegal, as the United States does. In several instances, countries without IML regimes now maintain them as stipulated by free trade agreements (FTAs) with the United States, in effect allowing the US to export at least part of its IML model to its allies.
Australia – The 1992 Broadcasting Services Act, which predates Section 230, protects internet hosts and ISPs from civil or criminal liability for hosting or transmitting content when they are unaware of its illegal nature. It also protects them from being compelled to monitor, make inquiries about, or keep records of the content they host or transmit. After an Australian white supremacist perpetrated the 2019 Christchurch mass shooting in New Zealand, the Australian government quickly passed the Criminal Code Amendment (Sharing of Abhorrent Violent Material) Bill, making it illegal for social media platforms to fail to expeditiously remove abhorrent violent user material, as defined by the bill, shared on their services.
Canada and Mexico – Article 19.17 of the US-Mexico-Canada Agreement (USMCA) stipulates that “no Party shall adopt or maintain measures that treat a supplier or user of an interactive computer service as an information content provider in determining liability” along with other language that closely mirrors Section 230.
India – Section 79 of the Information Technology Act creates a comparatively narrow IML regime under which online intermediaries may remove content they deem harmful but not necessarily unlawful. Online service providers are shielded from liability for content published by third parties on their platforms provided they do not initiate the transmission, select the receiver, or select or modify the information contained in the transmission, and provided they observe due diligence and other guidelines prescribed by the government. The law does establish liability if the intermediary had “actual knowledge” of content related to an illegal act or did not act expeditiously to remove it. The Indian Ministry of Electronics and IT (MeitY) updated these rules in February 2021 to introduce a tiered approach that requires intermediaries with over 5 million users to incorporate a company locally, assist the government in tracing unlawful content within 72 hours, trace the original content creator, and deploy AI tools to conduct content moderation.
Indonesia – Ministerial Regulation 5 (MR5) took effect in November 2020 and requires all private digital services and platforms to register with the Ministry of Communication and Information Technology and agree to provide access to their systems and data as specified in the regulation. Companies are also required to ensure they do not distribute “prohibited content.” Human Rights Watch says the law violates human rights standards.
Japan – Similarly, the 2019 US-Japan Trade Agreement also contains language creating a Section 230-style liability shield. On top of this, Japan’s Provider Liability Limitation Act does not require telecommunications service providers (TSPs) to actively monitor online content but holds them liable if they intentionally ignore consumer complaints and fail to act in response to them. The Japanese court system, too, has generally not held online service providers liable for the content of their users.
Latin America – Most countries in Latin America do not currently employ comprehensive IML regimes and often defer intermediary issues to other authorities, such as consumer protection laws, criminal laws, and data privacy laws. Brazil stands as a partial exception to this trend, as article 19 of the Marco Civil da Internet specifies that application providers are only liable for third-party content if they fail to make it unavailable after a court order directs them to do so. Legislative proposals on IML have been floated in the past, such as competing bills in Argentina that would have either shielded or subjected internet platforms to liability for third-party content, but none had been passed into law as of the end of 2021.
Mongolia – Mongolia’s General Regulatory Conditions and Requirements of the Digital Content Service includes several unique characteristics and provides perspective on how new and developing democracies may approach intermediary liability. The legislation prohibits discrimination between internet service providers and online content providers, as well as between online content providers and content creators. At the same time, the law prohibits certain types of content related to erotica, drugs, and alcohol, along with subjects that could harm “national solidarity” or children’s safety. Online platforms are also required to install word-filter software provided by the government and to publicly display the IP addresses of anyone who posts on their sites. In terms of liability, online platforms are obliged to monitor content on their sites and to take down illegal content within 24 hours or face penalties.
New Zealand – Section 24 of the Harmful Digital Communications Act 2015 creates a mechanism for users to send a “notice of complaint” to a website requesting the takedown of harmful third-party content. The law defines harmful content as content that is illegal or that, while legal, violates the “communication principles” outlined in the 2015 act. These principles cover content that is threatening, grossly offensive, obscene, harassing, or discriminatory, or that constitutes a breach of confidence, discloses sensitive personal information, makes a false allegation, incites individuals to send harmful messages, or incites an individual to commit suicide. Lastly, the law requires online service providers to take down reported harmful content within 48 hours; compliance with these provisions shields websites from civil or criminal proceedings brought against them.
South Africa – South Africa’s IML regime is included in its 2002 Electronic Communications and Transactions Act. Similar to New Zealand’s approach, the law outlines the requirements of a “notification of unlawful activity” and shields websites from liability for removing certain content they may deem potentially harmful. It also borrows concepts from the EU e-Commerce Directive (ECD), such as language on mere conduit, caching, and hosting, and extends the liability shield to online service providers who fit within those molds.
Thailand – Thai authorities have used existing authorization under the Computer Crimes Act (CCA) of 2007 to compel online intermediaries like Facebook to remove content that violated the lèse-majesté law prohibiting any criticism of the Thai royal family. Under threat of a ban, Facebook complied with the request to remove the prohibited content. The CCA was revised in 2016 to exempt service providers from penalty if they can prove they complied with government takedown requests.