Artificial Intelligence Policy and the European Union

A Look Across the Atlantic

In late 2019, then newly elected European Commission President Ursula von der Leyen announced her intent to make regulating the use of artificial intelligence (AI) a top priority for Europe. She promised to “put forward legislation for a coordinated European approach on the human and ethical implications of Artificial Intelligence” within her first 100 days in office. Following this announcement, the European Union (EU) drove forward legislative efforts to address the potential risks and harms of AI systems. However, designing government policy and regulation for AI systems and their use is no small feat; at the end of those 100 days, the European Commission had produced only a white paper exploring policy options. In 2021, the Commission put forward draft AI legislation that EU bodies and stakeholders actively debate today. Creating AI policy is difficult; it requires thoughtful consideration of the many perspectives on this complex technology and ongoing conversations to inform future decisions and compromise solutions. This paper presents experts’ and stakeholders’ points of view and considerations for AI policy.

AI has become central to everyday life, transforming society in many ways. It is used in product recommendation systems for online marketplaces, voice recognition technology, and tools to help direct street traffic. Many lifesaving technologies rely on AI, from informing health care decisions to assisting rescue efforts. While the applications and benefits of AI are numerous, there are also many challenges and risks. For instance, harmful bias in AI systems can perpetuate or exacerbate historical inequities in areas such as employment, health care, finance, and housing, and malicious actors can use AI tools to manipulate people.

The European Commission has made it a priority to create a regulatory framework that would prevent and minimize AI’s negative effects. The United States, by contrast, has focused less on regulation and more on “soft law” approaches, such as guidelines, standards, and frameworks, complemented by tort law and other existing laws (such as civil rights laws) that help hold people liable when harm occurs. Experts and policymakers, however, disagree about how to design AI policy and what balance it should strike between competing values, such as proactively preventing harm through government intervention versus taking a more permissive approach that encourages experimentation and innovation.

The Bipartisan Policy Center is committed to finding common-ground solutions to establish trust, reduce harm, and promote innovation in the field of AI. In producing this paper, BPC worked with stakeholders from academia, industry, civil society, and other sectors in the European Union and United States. Our goal was to better understand European regulatory approaches to AI, their influence on U.S. politics and society, and the challenges and opportunities of AI policymaking. Our efforts included in-depth conversations with and survey responses from a wide range of stakeholders from diverse backgrounds, summarized in this report on AI policy in the EU.
