
Tech Regulation Through Transparency

A policy shift is underway in the United States’ tech regulatory debate over how we communicate online and who controls what is seen. While discussions around free expression, censorship, and content moderation remain heated, many are also realizing that any regulation of those issues is likely to fail on First Amendment grounds.

That is why regulatory proposals are shifting toward requiring social media companies to be more transparent about their inner workings. Legislators hope that greater transparency will help the public better understand how content, especially problematic content like mis- and disinformation, gains traction; how algorithms decide what content to show or recommend to people; and how people engage with different kinds of content. As this national debate continues, states are moving forward with proposals of their own.

Republicans feel this will help them identify any coordinated bias against their content, while Democrats want more transparency to understand how much mis- and disinformation is spreading. But what does it mean for tech to be more transparent, and what kind of transparency will give us the answers we think we need?

Today, many tech companies already maintain robust transparency sites outlining their community standards, how they enforce those standards, the government data requests they receive, and more; Meta, Twitter, Google/YouTube, Twitch, and TikTok all publish such reports. Some platforms, like Meta and Google, also offer transparency into the political and issue ads running on their sites, and Meta allows some insight into all ads running at any given time.

Another way companies attempt transparency is by giving a glimpse into the content on their platforms. Twitter has long offered the most robust pipeline through its API. Meta has tried to do the same through its CrowdTangle tool and a project known as Social Science One. However, Meta is now restricting access to these tools, which have been plagued by delays and bugs.

Unfortunately, these efforts have fallen short of what researchers, journalists, and others want from the platforms. They do not trust the companies to present a complete picture of what is happening, and they want the ability to verify it on their own. They also want to know more about the metrics companies prioritize and how they design their ranking algorithms.

However, asking for these figures and getting data that actually gives regulators and the public the insights they want are two very different things. It will be very challenging for companies to give a complete picture of why a given piece of content was or was not shown to an individual, for a few reasons. First, hundreds, if not thousands, of inputs are used to determine what a person might want to see, which makes it hard to pin down exactly which ones were prioritized for a particular piece of content. Second, multiple algorithms and classifiers are in play at any given time, and it can be nearly impossible to pinpoint how they work together. Finally, a ranking system ultimately makes a calculation and predicts which content should be shown, and it does not usually show its work as to why it made that call.
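To make that attribution problem concrete, here is a minimal toy sketch of a ranking score that blends many weighted signals. Every feature name, weight, and the scoring function itself are hypothetical illustrations, not any platform's actual system; real rankers combine hundreds or thousands of learned inputs across multiple models.

```python
# Toy ranking sketch. All feature names and weights below are
# hypothetical illustrations, not any real platform's system.

# Hypothetical signals a ranker might weigh for one post and one viewer.
FEATURE_WEIGHTS = {
    "predicted_click": 1.2,
    "predicted_share": 0.8,
    "author_affinity": 0.6,
    "recency": 0.4,
    "predicted_report": -2.0,  # e.g., a policy-risk classifier's output
}

def rank_score(features: dict) -> float:
    """Blend every signal into a single opaque score."""
    return sum(FEATURE_WEIGHTS[name] * value for name, value in features.items())

post = {
    "predicted_click": 0.7,
    "predicted_share": 0.2,
    "author_affinity": 0.9,
    "recency": 0.5,
    "predicted_report": 0.1,
}

print(f"score = {rank_score(post):.2f}")  # score = 1.54
# Even with only five inputs, "why" this post ranked where it did is a
# blend of every signal at once; with thousands of learned inputs fed by
# several interacting models, no single human-readable reason exists.
```

A transparency mandate that asks for "the reason" a post was shown runs into exactly this: the honest answer is a numeric blend, not an explanation.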

Moreover, transparency is tricky for companies to do on their own. It is not in their best interest: the minute they open up, critics seize on every negative thing happening on the platform. This is where regulation comes in. As Brandon Silverman, the co-founder of CrowdTangle, said in a recent Senate Judiciary hearing, “The reality is that the single biggest challenge to transparency is that platforms can get away with doing nothing at all, and there are no real consequences.”

To solve this problem, policymakers such as Senators Coons, Portman, and Klobuchar have introduced the Platform Accountability and Transparency Act. Under this legislation, companies would be required “to provide vetted, independent researchers and the public with access to certain platform data.”

For algorithmic transparency, Senator Markey and Representative Matsui have introduced the Algorithmic Justice and Online Platform Transparency Act, which would establish a safety and effectiveness standard for algorithms. Similar legislation includes Senator Luján and Representatives Malinowski and Eshoo’s Protecting Americans from Dangerous Algorithms Act (PADAA) and the Justice Against Malicious Algorithms Act, which “would lift the Section 230 liability shield when an online platform knowingly or recklessly uses an algorithm or other technology to recommend content that materially contributes to physical or severe emotional injury.”

Even people who have worked inside platforms like Facebook push for regulation to require more transparency. In addition to Silverman, the Facebook whistleblower Frances Haugen has called for more transparency, as have members of the Integrity Institute – a new nonprofit created by people who worked on integrity/trust and safety teams at various tech platforms.

Requiring transparency does not come without its flaws. Eric Goldman at Santa Clara University has written about “the underappreciated Constitutional problems that arise when regulators compel Internet services to disclose information about their editorial operations and decisions.” He argues that mandated transparency will motivate companies to change their practices to please regulators, an indirect way for the government to regulate those companies’ speech.

These transparency efforts also raise questions about protecting user privacy, who gets access to the data, what problematic data companies might be compelled to retain, and how that data could be used for government surveillance. In addition, this legislation risks imposing the same rules on big and small platforms alike, which may not be appropriate given those platforms’ differing resources and structures. Daphne Keller of Stanford covered much of this in her recent testimony as well.

Transparency legislation holds a lot of promise for helping society better understand what is happening on these platforms. However, we must think through the unintended consequences transparency poses and build in guardrails from the beginning.
