Point-of-view: you are walking downtown. To your left, a group of teenagers is filming a TikTok dance challenge video. To your right, someone is tweeting their thoughts on the Barbie movie. Straight ahead, your friend is posting a cute picture of their dog on Instagram. In today’s digital age, a startling amount of user-generated content is posted to the internet every minute (as depicted by the chart below). In 2023, more than 4.9 billion people reported using social media as a place to learn, connect, share, and express themselves. Some of our greatest achievements and memories are shared first on these platforms.
However, social media can also spread cyberbullying, information disorder, and extremist or illegal content. So how do social media companies filter out potentially dangerous content? That is where artificial intelligence (AI) comes in.
Under Section 230 protections, online services have built and deployed powerful content moderation systems that combine human moderators and AI. As AI plays a more prominent role in moderating online speech, it is drawing more scrutiny, because transparency about how algorithms power social media platforms remains limited. This complex technology can be hard for the average internet user to understand, making accountability difficult to articulate and enforce.
In this post, we will help answer a few questions that some may have about social media algorithms. How do they work? Who designs them? Why do they function in a particular way?
What Are They?
Basically, algorithms are coded instructions that tell a computer system how to operate. Algorithms are central to AI because they allow a system to learn from data and improve its performance, making it faster and more capable over time.
Social media companies employ computer scientists and software engineers to build algorithms into the platform design process. These instructions are written for a variety of use cases, such as sorting data, automating decision-making, and enforcing a company’s internal community guidelines, terms and conditions, or trust and safety policies.
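To make the guideline-enforcement use case concrete, here is a minimal, hypothetical sketch of a rule-based filter of the kind platforms might layer beneath more sophisticated machine learning systems. The banned terms and example posts are invented for illustration and do not reflect any platform's actual policy list.

```python
# Hypothetical banned-term list; real platforms maintain far larger,
# frequently updated policy rules alongside ML classifiers.
BANNED_TERMS = {"spamlink.example", "buy followers"}

def violates_guidelines(post_text: str) -> bool:
    """Return True if the post contains any banned term (case-insensitive)."""
    text = post_text.lower()
    return any(term in text for term in BANNED_TERMS)

posts = [
    "Check out my dog at the park!",
    "Click spamlink.example to buy followers now!!!",
]

# Flag posts that match a rule; in practice, flagged posts might be
# removed automatically or routed to a human moderator for review.
flagged = [p for p in posts if violates_guidelines(p)]
```

In practice, simple keyword rules like this are easy to evade, which is one reason platforms pair them with the machine learning systems described below.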
These AI tools allow digital platforms to process user-generated content at a speed and scale that human moderators alone cannot match. Algorithmic content moderation draws on advanced data science and machine learning: systems train on large datasets, learn from our online behavior, and can increasingly operate with minimal human oversight.
By analyzing user behavior (input), platforms can gauge and personalize what content users see in their feeds (output). A person’s individual social media newsfeed is the result of algorithmic ranking, screening, and recommendations. For example, if a user watches a cat video on YouTube, the platform will then serve them similar content (e.g., dog or pet videos) in the future.
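The input-to-output loop described above can be sketched in a few lines. This is a deliberately simplified, hypothetical example of content-based personalization: tally the categories a user has watched, then rank candidate videos by how often the user engaged with each category. The watch history, category labels, and video titles are all invented; real recommendation systems weigh far more signals.

```python
from collections import Counter

# Input: the user's observed behavior (categories of videos they watched).
watch_history = ["cats", "cats", "dogs", "news"]

# Candidate videos the platform could recommend, labeled by category.
candidates = {
    "kitten compilation": "cats",
    "election recap": "news",
    "puppy fails": "dogs",
}

# Count how often the user engaged with each category.
prefs = Counter(watch_history)

# Output: a personalized ordering, with the user's favorite category first.
ranked = sorted(candidates,
                key=lambda title: prefs[candidates[title]],
                reverse=True)
```

Because this user watched two cat videos, the cat-related candidate rises to the top of their feed, mirroring the YouTube example above.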
The days when posts were shown in reverse chronological order are long gone. Algorithmic recommendations have become increasingly prominent among large social media platforms since the success of TikTok, an almost purely algorithm-driven platform.
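The shift from chronological to algorithmic feeds amounts to changing the sort key. This hedged sketch contrasts the two orderings; the posts, timestamps, and engagement scores are made up, and real ranking models combine many predicted signals rather than a single number.

```python
# Hypothetical posts with a timestamp and a model-predicted engagement score.
posts = [
    {"text": "Morning run", "posted_at": 1, "predicted_engagement": 0.2},
    {"text": "Dance challenge", "posted_at": 2, "predicted_engagement": 0.9},
    {"text": "Lunch photo", "posted_at": 3, "predicted_engagement": 0.4},
]

# The old model: newest post first.
chronological = sorted(posts, key=lambda p: p["posted_at"], reverse=True)

# The algorithmic model: the post predicted to engage the user most comes first.
algorithmic = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)
```

Under the chronological ordering the most recent post leads the feed, while the algorithmic ordering surfaces the post the model expects the user to engage with most, regardless of when it was published.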
What Do They Do?
Changing Policy Landscape
Digital platforms are constantly evolving and maturing their algorithms to provide users with a better web experience. As this technology grows increasingly capable and complex, policymakers debate what role regulation should play in maintaining a safe digital environment and ensuring social media companies are incentivized to do so.
For example, there is bipartisan concern about the potential negative impacts of algorithms, including AI bias, youth mental health, misinformation, censorship, and data privacy. The 118th Congress has introduced numerous bills addressing the use of algorithms, such as
- DATA Act,
- Platform Accountability and Transparency Act,
- Protecting Kids on Social Media Act,
- Kids PRIVACY Act, and
- Safe Social Media Act.
Amid heightened scrutiny, policymakers are aiming to increase transparency and explainability around algorithms, their role in content moderation decisions, and the consequences they have for society, particularly for children.
While the benefits of algorithmic content moderation are immense, its limitations and challenges must be addressed in a way that protects the American people without infringing on free speech or unintentionally making users worse off. For example, given the intersection of tech policy areas, regulation of algorithmic content moderation could have several tradeoffs for artificial intelligence and user data privacy.
It is vital that Congress have a comprehensive and accurate understanding of how algorithms operate because, for better or worse, they are here to stay.
Support Research Like This
With your support, BPC can continue to fund important research like this by combining the best ideas from both parties to promote health, security, and opportunity for all Americans.