Key Takeaways: AI Executive Order and Bipartisan Momentum

The Executive Order (EO) on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence is a significant step toward implementing a whole-of-government approach for establishing comprehensive standards and guidelines for the safe and secure use of AI. The EO leverages voluntary initiatives, ranging from the National Institute of Standards and Technology (NIST) AI Risk Management Framework to the White House Office of Science and Technology Policy’s (OSTP) Blueprint for an AI Bill of Rights, backed by industry commitments to improve AI development practices.

This approach aligns with BPC’s recommendations in our AI National Strategy for Congress. The strategy, which culminated in H.Res. 1250, called for a comprehensive governance framework to ensure the responsible use of AI for the future and accelerate the United States’ position as a global technology leader.

The EO represents a pivotal moment in the responsible AI landscape, and closely monitoring the actions that follow to achieve the administration’s broad goals is critical. In this blog, we discuss the key takeaways, identify outstanding gaps and AI governance challenges, and inform actions for the future.

10 Key Takeaways:

1) AI Safety and Security

The order requires coordinated development of standards and regulations, compelling companies to report their procedures for developing, training, and testing some of the largest AI models that could pose a security risk to the U.S., in accordance with the Defense Production Act. Companies whose models meet the technical thresholds must report red-team safety testing results and security measures to the government.

2) Civil Rights and Equity

The order encourages greater research and standards to address the potential for AI to lead to discrimination or bias in federal benefits administration, the criminal justice system, and housing and hiring decisions. In addition, the EO tasks the Department of Justice and federal civil rights offices with addressing algorithmic discrimination by collaborating with stakeholders and providing training, guidance, and best practices to state, local, Tribal, and territorial investigators and prosecutors.

3) Protections for Individuals

The order directs the Department of Labor to publish guidelines for best practices for employers to limit AI impacts on health and safety, workplace equity, harmful data collection practices, and employees’ ability to organize. It also signals a reorientation of existing job training and education programs to support a diverse workforce and build resilience for the impacts of AI.

4) Hire, Recruit, and Train AI Talent in the Federal Government

Supporting recent efforts to bolster federal government hiring, the order outlines a plan to advance existing federal technology talent programs to recruit and retain AI talent to keep pace with the rapid evolution of AI technologies. The plan includes reviewing current hiring practices, establishing new guidance for pay flexibility or incentive pay programs, and strengthening existing federal tech talent programs like the U.S. Digital Service and Presidential Innovation Fellowship.

5) AI Talent and Immigration Pathways

The order leverages existing authorities to modernize and streamline pathways for highly skilled immigrants and nonimmigrants to study and work in the U.S. It also seeks to foster diverse expertise in STEM-related fields. The order directs the Department of Homeland Security to review and initiate any policy changes necessary to support increasing access for these individuals.

6) Public-Private Innovation

By fostering collaboration between the public and private sectors through a National AI Research Resource and other investments, the order aims to accelerate U.S.-based AI research to inform risk mitigation strategies and enhance the advantages of AI.

7) Responsible AI Innovation

In the absence of comprehensive federal consumer data privacy laws, the order initiates actions to mitigate privacy risks in government software and federal procurement processes, including careful review of personally identifiable information obtained from data brokers. It also advances agencies’ research, development, and implementation of privacy-enhancing technologies and privacy impact assessments.

8) Cybersecurity of Critical Infrastructure

The order prioritizes securing critical infrastructure, requiring the Departments of Energy and Homeland Security to develop AI evaluation tools and testbeds to fully consider and understand potential AI impacts. The order requires agencies to collaborate with the private sector and academic community in developing these tools, and to consider potential connections to biological, chemical, critical infrastructure, and energy-security threats.

9) American AI Leadership Abroad

The order directs the State Department and Department of Commerce to collaborate with international stakeholders and allies to coordinate on AI risk management and development frameworks. These frameworks should align partners with the U.S.’s goals for risk management and accountability.

10) Generative AI

Addressing concerns around disinformation and transparency with generative AI, the order directs the Department of Commerce to develop guidance for watermarking and content marking to make it easier for Americans to recognize AI-created content. It also tasks several agencies, including NIST, with developing companion resources to existing frameworks (like the Secure Software Development Framework) to ensure the practices are replicated for generative AI models.

Ensuring Longevity and Bipartisanship in AI Governance:

Executive Orders in the United States exert authority over federal agencies, guiding their operations and policy implementation. However, EOs are not etched in stone; they can be revised or revoked by future Presidents and nullified by Congress, highlighting their flexibility and impermanent nature. Previous orders have aimed to set federal AI approaches, including EO 13859, issued during the Trump administration. EOs also face significant implementation challenges, with fewer than 40% of their mandated actions verifiably implemented. Implementation of the 111-page EO will require significant time and resources from government and external stakeholders.

While this EO puts into motion many positive actions, it will not be enough to safeguard national security, promote responsible and transparent use of AI in the public or private sector, or protect consumers’ privacy, particularly that of children. Carefully crafted, bipartisan efforts in Congress will be critical to ensuring that important issues at the intersection of data privacy and AI governance are addressed. Several bipartisan-led efforts demonstrate a willingness to work across the aisle to create durable policy for responsible AI development and use. Further progress made through the bipartisan cooperation of Senators Schumer (D-NY), Rounds (R-SD), Young (R-IN), and Heinrich (D-NM), who have sought input from a variety of stakeholder perspectives, is a testament to the potential for strong bipartisan collaboration on broad AI policy.
