Generative AI Is a Multi-tool, Not a Magic Wand for Election Administration

Stakeholders across the government, private, and nonprofit sectors spent much of last year preparing for AI-accelerated challenges to election administration. Their concerns were sparked by the rapid emergence of widely accessible generative artificial intelligence (generative AI, or GAI) tools, which can create text and synthetic images, audio, and video.

Less understood is how GAI could improve election administration. Just 7% of election administrators reported using AI in their work, according to a May 2024 survey by the Brennan Center for Justice. Although other government workers and many commercial workplaces are exploring how to integrate GAI into their jobs, many election administrators are reluctant to do so because elections are high-stakes exercises. At a precarious time for public trust in election integrity, the margin for error is razor thin. Simultaneously, tight timelines, small staffs, and limited resources afford administrators little time to innovate and to experiment with new technologies.

On August 29, 2024, the Bipartisan Policy Center held an exercise for election officials to more closely consider election-related uses of GAI. Building on previous suggestions—including those from The Elections Group and International IDEA—this event convened election officials, technologists, and researchers to identify cumbersome, challenging, or time-consuming tasks routinely completed by election offices. (As these tasks often keep officials from other work, we refer to them as “bottlenecks.”) Participants broke into small groups to discuss whether and how GAI might “unstop” these bottlenecks.

Our four main takeaways are:

  • Election administrators can learn from other governments’ AI experiments. These include Pennsylvania’s pilot program that trained state employees to use off-the-shelf GAI tools like ChatGPT for simple yet time-consuming tasks, such as creating rough drafts of documents or producing charts to visualize data.
  • Most common bottlenecks are solvable without GAI. Election administration is typically underresourced. Investment in pre-existing solutions—such as systems for automatically scanning and uploading written forms into database software—would likely be simpler and more reliable than adopting GAI for its own sake.
  • GAI can be used as an entryway to other applications, allowing users who might not have the time or know-how to use tools that were previously beyond their reach. This might be as simple as reworking spreadsheets or as complex as selecting polling locations or drawing precinct lines with GIS software.
  • Because of the legal and ethical risks inherent to election work, offices experimenting with GAI should be trained on best practices in risk mitigation and take a cautious, thoughtful approach to implementation that balances security and privacy considerations.

Wide but Shallow Uses Are the Easiest Place to Start

The discussion surfaced many plausible ways of applying off-the-shelf, general-purpose GAI tools to “wide but shallow” use cases—straightforward, repetitive tasks that touch a broad range of employees in their day-to-day work but require minimal technical expertise. In these use cases, general-purpose GAI tools can improve efficiency and accessibility across diverse workflows without requiring “deep and narrow” advanced AI customization or technical integration.

Other government offices are already implementing GAI in this way. For example, Pennsylvania launched a pilot program in 2024 through which employees could access ChatGPT for use in their work and receive training on how to use it safely and responsibly.

No election officials are currently participating in the Pennsylvania pilot, but participants in this discussion came up with similar potential applications for election administration. Some of these ideas drew on GAI’s ability to summarize information—for example, as a starting point for finding academic or government resources on a topic. Although a layer of human review remains essential, large language model (LLM)-based chatbots can quickly draft grant proposals or reports, procurement documentation, and other paperwork. They could also transform data from audits or other processes into graphs and visuals for public presentations with minimal time or training.

Other ideas leveraged GAI’s ability to analyze data. For example, it could examine records of congestion at polling places across early voting periods and Election Day to better allocate poll workers in future elections. This kind of analysis could also facilitate the use of AI for scheduling workers, something other industries are already experimenting with. In addition, GAI tools could help officials analyze underutilized data sources—such as timestamps from voting machines—to find patterns suggestive of rare-but-not-impossible errors, such as multiple votes being cast in a matter of seconds. Although spreadsheet management software like Microsoft Excel already makes such analysis possible, an LLM with a chatbot interface could allow users less familiar with those tools to conduct similar tasks, and would enable more users to complete them more quickly.
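To make the timestamp idea concrete, here is a minimal sketch of the kind of check described above—flagging runs of ballots cast implausibly close together. The function name, log format, and thresholds are all invented for illustration; a real analysis would use each jurisdiction's actual machine logs and review rules.

```python
from datetime import datetime, timedelta

def flag_rapid_casts(timestamps, window_seconds=5, threshold=3):
    """Flag any run of `threshold` or more ballots cast within `window_seconds`.

    `timestamps` is a list of ISO-8601 strings from one machine's log.
    Returns the starting timestamp of each suspicious run for human review.
    """
    times = sorted(datetime.fromisoformat(t) for t in timestamps)
    window = timedelta(seconds=window_seconds)
    flagged = []
    for i in range(len(times) - threshold + 1):
        # If the Nth ballot after this one landed inside the window, flag it.
        if times[i + threshold - 1] - times[i] <= window:
            flagged.append(times[i].isoformat())
    return flagged

log = [
    "2024-11-05T08:00:01", "2024-11-05T08:04:10",
    "2024-11-05T09:15:00", "2024-11-05T09:15:02", "2024-11-05T09:15:03",
    "2024-11-05T10:30:45",
]
print(flag_rapid_casts(log))  # flags the 09:15 cluster of three ballots in 3 seconds
```

The logic here is simple enough for a spreadsheet; the point in the text is that a chatbot interface could let an official describe this check in plain language rather than write it themselves.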

These data analysis use cases are an example of what some are calling a new paradigm in user-interface design: Instead of issuing step-by-step commands to a computer, officials can simply tell GAI applications their desired outcome. The ideas above are simple, early examples; in the future, officials could potentially use GAI as a “wrapper” to manipulate bespoke software to solve more complex problems such as re-precincting, a process with competing legal and normative requirements.

Most Common Bottlenecks Don’t Need GAI Solutions

Many of the bottlenecks identified by participants have existing software-based solutions that do not rely on GAI but might be out of administrators’ reach. Chronic underinvestment in election-related staffing and infrastructure often means that adopting cutting-edge technology is not top of mind for administrators, and the simplest solutions to their problems often involve budgets or personnel rather than technology. For many common bottlenecks, it is more practical to give administrators the resources to acquire pre-existing tools and train staff to use them than to build a custom GAI tool.

List maintenance is one example. Administrators spend significant time maintaining voter rolls: registering new voters, updating addresses, removing voters who move out of a jurisdiction or die, removing duplicate records, correcting errors, and making other necessary changes. They sometimes receive data from departments of motor vehicles and other state agencies, but this data can contain mistakes. Participants proposed using optical character recognition to make handwritten forms machine-readable so that common changes and updates could be automated—a good idea, and one that does not require GAI.
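Once OCR has converted a handwritten form to text, the automation step is routine parsing, not GAI. The sketch below shows that step for a hypothetical change-of-address form; the field labels are invented, and real forms vary by state.

```python
import re

# Hypothetical field labels on a voter-update form; actual forms differ by jurisdiction.
FIELDS = {
    "name": r"Name:\s*(.+)",
    "old_address": r"Previous Address:\s*(.+)",
    "new_address": r"New Address:\s*(.+)",
}

def parse_update_form(ocr_text):
    """Turn OCR output from a change-of-address form into a record dict.

    Fields the OCR engine failed to read come back as None so a human
    can review them before the voter roll is touched.
    """
    record = {}
    for field, pattern in FIELDS.items():
        match = re.search(pattern, ocr_text)
        record[field] = match.group(1).strip() if match else None
    return record

sample = """Name: Jane Q. Voter
Previous Address: 12 Elm St, Springfield
New Address: 48 Oak Ave, Springfield"""
print(parse_update_form(sample))
```

Defaulting unreadable fields to None, rather than guessing, keeps a human in the loop for exactly the records where automation is least trustworthy.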

GAI might have potential solutions for some bottlenecks, but it is not yet reliable enough for practical use. Consider ballot proofing, the onerous process by which officials confirm, line by line, that the language on each ballot is precise and correct as prescribed by law. Proofing requires meticulous attention to detail: a missing period or comma in a candidate’s name is enough to throw an election result into the court system. As one participant said, ballot proofing “is the one thing that has to be absolutely, 100% perfect.” But GAI is not absolutely, 100% perfect; officials who experimented with GAI for ballot proofing described the results as “accurate, but not very complete.”

Participants also proposed using AI for “time and motion tracking” to collect data on how long it takes election workers to complete each step of a physical task, such as opening envelopes and counting ballots. If this data could be captured and analyzed, it could help election officials assign their workers to the tasks that they are most efficient at, identify the least-efficient parts of their processes, and estimate how costly changes to that process might be. These types of studies are already common in manufacturing and could be replicated for election administration.

Move Thoughtfully, Especially When Pursuing Specialized, “Deep and Narrow” Applications

While our August event proposed several ways of using GAI for election administration, few of them were revolutionary. The technology is a time-saver and an efficiency-booster, but it is not magic or a substitute for human labor, thought, and oversight. Many inefficiencies in election administration can be solved using older, more reliable technology.

The best use cases for GAI in election administration tend to be general-purpose applications, but even these simple functions require training to minimize risk and maintain ethical guardrails. This training should include material on AI’s risks—it is especially important to train staff to be aware of the potential for AI to “hallucinate” (outputting false or misleading information). They must know that at the end of the day, human staff are responsible for quality control around AI’s usage. It is also important to train workers on what kinds of data can be provided to which systems. ChatGPT, for example, collects user prompts for training data, so personally identifiable information should not be used. An internal GAI tool, however, might not have these restrictions.

Even more so than for general-purpose, “wide and shallow” applications, more specialized “deep and narrow” uses for GAI require careful implementation. One such application that surfaced in the discussion was the use of AI to summarize, sort, and deliver training materials in response to user inquiries. Imagine, for example, an interactive handbook for poll workers or new full-time employees: Instead of having to hunt for guidance on specific processes or questions, users could query an AI agent. In such cases, the risk of AI hallucination can be controlled with techniques like retrieval-augmented generation (RAG), which grounds a chatbot’s answers in information retrieved from a curated, preselected set of sources. RAG should not substitute for in-person training, but it could prove to be a valuable resource for workers who need a quick refresher or have specific questions.
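The pattern behind RAG is easy to illustrate. The sketch below uses simple word overlap in place of the embedding search a production system would use, and the handbook snippets are invented; the point is only that the chatbot’s answer is drawn from curated text rather than generated freely.

```python
# Invented handbook snippets; a real system would index official training materials.
HANDBOOK = {
    "provisional": "Offer a provisional ballot when a voter's registration "
                   "cannot be confirmed at the polling place.",
    "id": "Acceptable forms of ID are listed in the state poll worker handbook.",
    "closing": "At closing time, anyone already in line may still vote.",
}

def retrieve(question, passages):
    """Return the passage sharing the most words with the question.

    Stands in for the embedding-based semantic search a real RAG
    system would use.
    """
    q_words = set(question.lower().split())
    def overlap(text):
        return len(q_words & set(text.lower().split()))
    return max(passages.values(), key=overlap)

def answer(question):
    # A production system would hand the retrieved passage to an LLM as
    # context, constraining its answer to the curated source material.
    passage = retrieve(question, HANDBOOK)
    return f"Per the handbook: {passage}"

print(answer("Can people still vote if they are in line at closing time?"))
```

Because every answer traces back to a specific curated passage, a wrong or missing answer points to a gap in the source material rather than an unexplainable hallucination.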

RAG could also power chatbots that answer common questions about elections and voting for the public, such as registration deadlines, polling locations, and voting procedures. The most cautious approach for a use case like this would be to have a chatbot direct users to web pages containing prewritten answers to common questions. A hallucination from an official government chatbot would be troubling and potentially dangerous; RAG can help minimize this risk.

It has become common to warn that AI systems must have a “human in the loop,” but who that human is—and what they value—is as important as the tools they use. Ethical matters like these require more than discussion with vendors or technologists. The social sciences and humanities also have a valuable role to play in shaping GAI applications and measuring their effects, and it is critically important to collect and consider public feedback. No matter the use case, government officials who are exploring GAI should think proactively about how it could affect the rights of the public—they must account for biases, both human and machine.
