Online content regulation beyond the big platforms

To date, most of the debate about regulating tech has focused on the big platforms – Meta/Facebook, Google, Amazon, and to some extent Twitter. Increasingly, however, attention is shifting to smaller platforms, including TikTok, Twitch, Nextdoor, Telegram, Signal, and many others. Recent tragedies such as the Buffalo shooting have also shown how newer apps such as Yubo or Discord can have an outsized impact on younger generations.

The continued evolution of the internet – including changes in how people use social media and the emergence of new apps – is proving to be a challenge not just for regulators trying to write rules in this space, but for the platforms themselves, which may not have the experience or resources necessary to police problematic content. Let’s take a look at the challenges for each – starting with the platforms.

Start-ups by definition tend to be lean. When Instagram was acquired by Facebook back in 2012, it had 13 employees and 25 million users. WhatsApp had 55 employees serving 420 million users. Today Yubo has 113 employees and Discord has 1,872. These numbers are a far cry from the tens of thousands of content moderators alone that the big platforms employ, let alone the engineers, data scientists, researchers, and policy experts who work every day to keep their platforms safe.

While this leanness can be good for the bottom line, it becomes a liability when bad actors migrate to a platform precisely because it lacks the ability to police content. Start-ups and smaller platforms likely won’t have robust content moderation policies, nor will they have built the tools to proactively find problematic content. They won’t have experience handling requests from governments and law enforcement – and in some cases, governments may not even know how to contact them. In Brazil, for example, the Supreme Court recently blocked Telegram for ignoring the country’s repeated contact attempts, a move that could have led to the permanent shutdown of the app there. Smaller platforms also won’t have the resources to produce robust transparency reports, maintain libraries of political and issue ads, or provide data to researchers.

While bigger technology companies have the capacity to act more quickly, sometimes even that is not enough. For example, Twitch removed the Buffalo shooter’s live video in less than two minutes, while it had only 22 viewers – but that was all it took for the content to live on and spread across many other platforms, big and small, despite efforts to take it down.

This dynamic also poses a challenge for policymakers trying to regulate in this space. How do you ensure online platforms act responsibly without placing so much burden on them that you inadvertently stifle the growth and innovation of these apps – some of which could become major competitors to the big players?

In Europe, one answer has been the Code of Practice on Disinformation, a voluntary agreement that platforms and others sign on to and against which their improvement is measured. The final update, to be released on June 16, 2022, is expected to allow smaller platforms to opt out of commitments they can’t fulfill. Bigger platforms will not be given such a pass.

In the United States, legislators have tried to draw the line based on a platform’s monthly active users, its market cap, or whether it accepts advertising. None of these thresholds is sufficient to ensure that smaller platforms build safety measures as they grow.

The ever-changing nature of the internet makes it difficult for policymakers to regulate. Moreover, as people shift toward sharing content via messaging apps – some of which are end-to-end encrypted, meaning the platforms themselves can’t read the content – or in the growing metaverse, current regulations fail to give sufficient guidance on how content should be moderated and what platforms should be required to do.

Continued work will be needed between platforms, policymakers, and others to determine the right balance for protecting people online but not stifling innovation.
