
Coordinated Influence Operations: Fear, Uncertainty, and Doubt

Background on Influence Operations

Most social media users have heard of bots, automated software programs that mimic human behavior online. Bots have gained notoriety because they have been weaponized, through automated posting, to spread information disorder, harmful content, and general spam. Bots made mainstream media headlines during Elon Musk’s acquisition of Twitter: Musk claimed that Twitter was lying about the number of spam-bot accounts on its platform, which is why he was reluctant to follow through on the deal. What people may not know is that bots are part of a larger phenomenon called influence operations.

The Carnegie Endowment for International Peace defines influence operations as “organized attempts to achieve a specific effect among a target audience.” Influence operations leverage social media through a range of strategies, including bots, trolls, spamming, disinformation, fake reviews, advertising, cyberattacks, and a more recent tactic called coordinated inauthentic behavior. Online influence operations are anything but new. As far back as 2008, the Chinese government concluded that conducting influence operations in the digital sphere was the best way to counter anti-party sentiment and deter collective action. The Chinese government paid hundreds of thousands of people, dubbed the “50-cent party,” to create accounts and flood Chinese social networks with positive publicity for the CCP’s ideology and agenda, spinning the narrative in its favor.

The skillful manipulation of social media soon expanded to radicalized individuals, terrorist groups, businesses, and foreign governments, which devote substantial resources, teams, and infrastructure to online propaganda. The digitization of modern society, combined with the anonymity granted by the internet, has allowed online influence operations to proliferate and become more sophisticated and covert. However, not all online influence operations have malicious intent. Just as bots are not intrinsically malicious, influence operations can be used for positive advocacy, social movements, and general information sharing, such as campaigns designed to get citizens to be more civically engaged and vote. Given recent evidence of foreign election interference, weaponizing the internet has become increasingly commonplace in and around election cycles. With the midterm elections here, it is paramount to promote media literacy so that people can recognize disinformation online. As the threat grows more prevalent, malicious actors are developing new techniques to sow fear, uncertainty, and doubt in our society.

Background on Coordinated Influence Operations

Coordinated Inauthentic Behavior (CIB), a term coined by Facebook in 2018, is a newer type of influence operation that has recently gained national attention. Most recently, Meta shut down a Russian-backed coordinated inauthentic behavior campaign involving a network of more than 60 fake websites masquerading as legitimate news outlets. The sites posted articles criticizing Ukraine and the sanctions against Russia. It was the largest, most complex Russian-coordinated disinformation campaign since the beginning of the Russia-Ukraine conflict. This phenomenon is frighteningly relevant, so what is coordinated inauthentic behavior, who is behind it, and why is it so dangerous?

Coordinated inauthentic behavior occurs when bad actors work together to mislead a target audience while posing under identities not their own. Simply put, people create fake identities to spread disinformation. An example of CIB could be a foreign organization disguised as hyper-partisan Americans posting divisive political hashtags. In contrast to bots, CIB campaigns:

  1. employ real people across multiple accounts; and
  2. are orchestrated with the intent of misleading a target audience—whether for political, financial, or other reasons.

Malicious actors deploy a variety of complex methods to build influence online. They often create fraudulent personas, accounts, articles, websites, or media outlets with plausible profiles. The fake accounts attempt to generate engagement by running ads, buying fake followers, commenting on verified pages, resharing each other’s content, or getting a particular hashtag trending. The goal is to amplify the operation’s content and manipulate the information environment. The operators and organizations behind this activity try to conceal their identities and remain anonymous, but some previously exposed CIB campaigns have had ties to military groups and government entities.

Inauthentic accounts, especially those originating from foreign sources, are often designed as a form of soft power seeking to polarize society, manipulate public debate, and undermine the integrity of democratic institutions. CIB also has significant security implications and may be used for radicalization, recruitment, and the incitement of offline violence. For instance, the White Helmets, a Nobel Prize-nominated humanitarian organization working in Syria, found themselves the targets of a Russian disinformation campaign. Alt-right activists, backed by the Russian government, propagated a story linking the White Helmets to al-Qaida. As the public narrative shifted, the White Helmets came to be seen as terrorists and consequently came under violent attacks in which hundreds of volunteers were killed. This illustrates how influence campaigns can cross the threshold from manipulating people’s views and emotions online to real-world physical violence and destruction.

Furthermore, the argument that online extremism can translate into offline violence is a key issue in the recent Supreme Court case Gonzalez v. Google. The Supreme Court agreed for the first time to rule on the controversial law that has governed the internet since 1996, Section 230 of the Communications Decency Act, which states that social media platforms cannot be held legally responsible for the content users post. The question presented is whether Section 230 protects online platforms when their algorithms amplify extremist propaganda, which terrorists use to radicalize new recruits, build influence, and facilitate attacks. The Supreme Court’s ruling in Gonzalez v. Google could have significant implications for content moderation and the future of the internet.

Countering Influence Operations

Tech companies

Emboldened by Section 230, many tech platforms are already taking proactive steps to police their sites for harmful content and malicious behavior. To mitigate coordinated influence operations, social media companies have developed increasingly sophisticated content moderation systems to detect and remove inauthentic accounts. Companies such as Meta, Google, and Twitter have carried out a significant number of successful platform interventions. After pioneering the term ‘coordinated inauthentic behavior,’ Meta has consistently released public reports documenting the CIB networks and pages it removes. Twitter recently joined the fight by banning over 100 accounts promoting Putin at the beginning of the Russia-Ukraine conflict. Google is also taking more aggressive action via YouTube to stop the spread of coordinated deceptive practices involving the war in Ukraine. While mainstream social networks continue to advance their content moderation practices, bad actors are constantly evolving their methods in response to new countermeasures. Unlike bots, effective coordinated influence operations can blur the line between genuine human conversation and deceptive online behavior, making them harder to identify with bot-detecting algorithms. Because real people are creating the fake accounts, platforms may struggle to combat CIB without censoring legitimate users. If clear lines are not drawn regarding what is acceptable versus unacceptable, it will be difficult to accurately distinguish malicious CIB from ordinary online movements.
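To make the detection challenge concrete, the sketch below illustrates one simple coordination signal of the kind researchers look for: pairs of accounts that repeatedly share the same link within a short time window. This is a toy illustration only, not any platform’s actual method; the records, field names, and thresholds are hypothetical, and production systems combine many such signals with human review.

    # Minimal sketch (hypothetical data and thresholds): flag account pairs that
    # repeatedly share the same URL within a short time window.
    from collections import defaultdict
    from itertools import combinations

    # Hypothetical (account, url, unix_timestamp) records, not any platform's real API or data.
    posts = [
        ("acct_a", "http://example.com/story1", 1000),
        ("acct_b", "http://example.com/story1", 1030),
        ("acct_c", "http://example.com/story1", 1055),
        ("acct_a", "http://example.com/story2", 5000),
        ("acct_b", "http://example.com/story2", 5020),
    ]

    WINDOW_SECONDS = 60   # assumed threshold for "near-simultaneous" sharing
    MIN_CO_SHARES = 2     # assumed minimum number of co-shared URLs before a pair is flagged

    # Group the shares of each URL, then count account pairs that posted it close together in time.
    shares_by_url = defaultdict(list)
    for account, url, ts in posts:
        shares_by_url[url].append((account, ts))

    pair_counts = defaultdict(int)
    for url, shares in shares_by_url.items():
        for (a1, t1), (a2, t2) in combinations(shares, 2):
            if a1 != a2 and abs(t1 - t2) <= WINDOW_SECONDS:
                pair_counts[tuple(sorted((a1, a2)))] += 1

    # Pairs that repeatedly co-share are candidates for human review, not proof of coordination.
    suspicious = {pair: n for pair, n in pair_counts.items() if n >= MIN_CO_SHARES}
    print(suspicious)  # {('acct_a', 'acct_b'): 2}

Even this toy rule captures the tradeoff described above: tighten the window or threshold and coordinated networks slip through undetected; loosen them and ordinary users sharing a viral story risk being swept up.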

Civil Society Organizations

While social media’s content moderation capabilities may be the first line of defense against the spread of disinformation, civil society organizations (CSOs) play a key role in investigating influence operations and advocating for countermeasures. Many nonprofits, non-governmental organizations, and CSOs are working to study foreign influence campaigns, including the RAND Corporation’s Information Operations project, the Atlantic Council’s Digital Forensic Research Lab, the German Marshall Fund of the United States’ Alliance for Securing Democracy initiative, and the Carnegie Endowment for International Peace’s Partnership for Countering Influence Operations. These organizations use their analytical capabilities, technical expertise, and global reach to help foster evidence-based policy recommendations. They can also assist with content moderation enforcement through fact-checking, media monitoring, and digital literacy training. For example, in 2019, acting on “a tip shared by a local civil society organization,” Meta removed a CIB network of more than 100 accounts that attempted to manipulate Moldova’s parliamentary elections. But while these organizations aim to positively shape policymaking, their research initiatives face limitations such as inadequate funding and limited access to private-sector data.

Academia

Nestled between the public and private sectors, academic institutions play an important role in helping society understand influence operations and their effects. Many institutions have developed interdisciplinary methodologies that bring together data scientists, policy analysts, law students, journalists, and experts from many other fields to examine these novel problems from a wide variety of angles. Some institutions at the forefront of this effort include the University of Washington’s Center for an Informed Public, Carnegie Mellon University’s Center for Informed Democracy & Social-Cybersecurity, and Stanford Law School’s Internet Observatory. A recent report by Stanford investigated “an interconnected web of accounts on Twitter, Facebook, Instagram, and five other social media platforms that used deceptive tactics to promote pro-Western narratives in the Middle East and Central Asia.” It was the largest covert pro-US influence operation identified to date. One challenge these institutions face is the lack of standardized channels for translating their research into useful policymaking.

Federal Government

Alongside the actions of tech companies, CSOs, and universities, the federal government must ensure it can counter online influence operations. A variety of federal entities currently deal with influence operations: the Department of Homeland Security’s Countering Foreign Influence Subcommittee, the Cybersecurity & Infrastructure Security Agency’s Mis-, Dis-, and Malinformation team, the Department of State’s Global Engagement Center, and the FBI’s Foreign Influence Task Force. Only a handful of state governments have launched initiatives, such as New Jersey’s Disinformation Portal and Arizona’s Task Force on Countering Disinformation. Recently, after DHS terminated its newly created Disinformation Governance Board, the department’s Inspector General released a report arguing for a unified, department-wide strategy to effectively combat disinformation. Congress has previously introduced bills addressing the spread of disinformation (S.2493 – Combatting Foreign Influence Act of 2019 and H.R.6971 – Educating Against Misinformation and Disinformation Act), but these have thus far failed to pass. As seen with the Section 230 debate, regulating online content has proven to be a difficult balancing act between combatting disinformation and protecting freedom of speech. Legislators from both parties argue it is time to do something about misinformation online, but a unified strategy has not been established, and the debate remains split between pro-moderation and anti-censorship factions.

Conclusion

In an era defined by information disorder, it has become incredibly important to stay educated and exercise caution when navigating the internet. Coordinated inauthentic behavior is a relatively new and understudied phenomenon, and common definitions are still needed. While social media platforms have been at the forefront of defining the terms of the CIB issue by creating de facto content moderation standards, combatting online disinformation will require a wide range of actors. It is important to encourage cross-sector collaboration and information sharing among government institutions, the private sector, universities, and civil society organizations to build national resilience to influence operations.
