
Generative AI and Disinformation

Generative artificial intelligence (AI) has led to breakthroughs in innovation across sectors, and the technology's capabilities and adoption are growing rapidly. This growth, however, can also fuel disinformation: information created and disseminated with the intent to mislead. Generative AI may increase the quality, quantity, and targeting capabilities of disinformation, undermining the integrity of the information space. This blog discusses how AI can augment disinformation campaigns and the difficulties policymakers may face in protecting against these risks.

How Generative AI Augments Disinformation

Advances in large language models (LLMs) and deep learning are powering generative AI programs that can produce remarkable content in seconds, such as composing a sonata, writing a short story in the style of Hemingway, or drawing a Picasso-style landscape. This ability to quickly and efficiently mimic human-generated content may make AI-generated disinformation cheaper, faster, and more effective.

Before the widespread rollout of generative AI, threat actors (groups or individuals who engage in adversarial action) had to employ a network of relatively skilled content creators, outsource message creation to third-party contractors, or distribute low-quality messages. Employing a cohort of skilled writers is expensive; outsourcing may reduce costs but increases operational security risks, as when freelancers described their work to The New York Times; and disinformation rife with grammatical errors tends to lack credibility. Generative AI can help bypass these impediments by quickly producing massive amounts of unique, idiomatically correct, and extremely convincing disinformation.

Beyond enhancing the credibility of disinformation campaigns with high-quality imagery, cybercriminals are leveraging generative AI to augment social engineering campaigns: cybersecurity attacks that use psychology to manipulate people into sharing sensitive information. An emerging trend is for scammers to use AI to mimic the voice of a loved one in need of immediate help. This simple yet effective ploy for quick cash may give way to more sophisticated and targeted attacks. In our increasingly digitally connected world, would it be terribly out of character for a manager to leave a voicemail asking a coworker to forward a working document? Social engineering is an effective strategy even against well-trained employees, and malicious actors are only increasing the efficiency of these attacks. Furthermore, as the number of people and the volume of content online continue to grow, hackers will have access to more data, allowing them to impersonate more targets and potentially increasing the vulnerability of critical infrastructure.

While students and professionals use generative AI to improve their coding ability, cybercriminals also use it to improve malicious code. This dual use expands generative AI's role: it can author disinformation and write the code that facilitates its distribution, thereby reducing the cost of running an influence operation. This reduction in cost may incentivize existing threat actors to engage in more disinformation campaigns and could even entice new threat actors to run their first influence operations.

Together, generative AI's ability to craft idiomatic narratives, produce multimedia resources, and write code may decrease the risks and increase the rewards of running influence operations. Armed with these new tools, threat actors may conclude that their campaigns are less likely to be caught by algorithmic content moderation, that operations are less costly, and that they can execute higher-value attacks. These opportunities, in turn, may increase the number of information operations.

AI Disinformation and Political Literacy

Whether they weigh in on upcoming legislation or simply connect constituents with elected representatives, public comments are a crucial part of the democratic process. Even before generative AI became a viable disinformation tool, threat actors fabricated millions of comments in favor of repealing net neutrality. There is concern that generative AI will exacerbate the problem: a recent study from Cornell found that legislators responded to AI-generated constituent letters at nearly the same rate as human-written ones, demonstrating how easily deceptive comments could flood elected officials. It is a concerning possibility that deserves attention.

The Difficulty in Solving AI Disinformation

One pathway that may provide some protection against AI-generated disinformation campaigns is imposing digital ID verification requirements across social media platforms. Authenticating an individual's identity could help social media users distinguish real U.S. residents from malicious fake accounts. However, a verification system could stifle free expression and anonymous communication online while also raising data privacy concerns.

Another difficulty is the private sector's mixed incentives. While many technology companies take steps to reduce disinformation through policies, products, and personnel, there is a certain hesitancy to go further. Technology companies do not necessarily want to be the ultimate authority on what is true or false, which constrains how effective their policies and enforcement actions can be against disinformation. Furthermore, as disinformation spreads across the information ecosystem, user engagement with false posts can generate additional ad revenue. Even companies that take many steps to prevent disinformation are therefore battling incentives that cut the other way.

Conclusion

As the new wave of generative AI accelerates, it may be leveraged to advance cybercriminals' activities and disinformation campaigns. Generative AI has the potential to author believable disinformation at scale, produce complementary multimedia support, and even write the code needed to distribute that content. This may reduce the cost of running disinformation campaigns, which in turn may incentivize more information operations. To face this nascent threat, policymakers and industry will need to work together on creative solutions to what is likely to become a serious problem in the near future.
