Policymakers are increasingly paying attention to and advancing policy on the development and use of artificial intelligence. The technology’s many applications have the potential to greatly benefit society and the economy, but they also pose risks to privacy. Further, while these technologies could do much to reduce bias, if executed poorly they could exacerbate existing discriminatory practices.
AI technologies are not independent actors beyond our control. We can build the tools and shape the policy that will guide their development. The new administration and Congress should take a bipartisan approach to assessing these technologies and, if appropriate, place guardrails around AI that reduce its risks while allowing it to fulfill its potential.
Ever since the term “artificial intelligence” was coined in 1956, there has been no consensus on a formal definition. While the debate over what constitutes AI has continued to evolve, the Fiscal Year 2019 National Defense Authorization Act stipulated that AI can include the following:
- Any artificial system that performs tasks under varying and unpredictable circumstances without significant human oversight, or that can learn from experience and improve performance when exposed to data sets.
- An artificial system developed in computer software, physical hardware, or other context that solves tasks requiring human-like perception, cognition, planning, learning, communication, or physical action.
- An artificial system designed to think or act like a human, including cognitive architectures and neural networks.
- A set of techniques, including machine learning, that is designed to approximate a cognitive task.
- An artificial system designed to act rationally, including an intelligent software agent or embodied robot that achieves goals using perception, planning, reasoning, learning, communicating, decision making, and acting.
AI has become a salient policy issue partly because its applications have the potential to cut across every area of our lives—such as healthcare, education, finance, and criminal justice. Some applications are more innocuous, like algorithms that use data from your past views or purchases as a basis to make movie recommendations, or online chat assistants that learn from previous help requests to navigate a user through a webpage. However, many of AI’s use cases, such as algorithms used to allocate medical resources or autonomous weapons systems, are highly consequential for society, the economy, and national security.
To seize the opportunities presented by beneficial uses of AI, such as tracking the spread of the pandemic or personalizing online education, policymakers need to be mindful of the risks, like privacy violations presented by facial recognition or biased data worsening racial disparities in criminal justice. As such, there are several relevant AI-related policy issues that require consideration.
AI’s potential to vastly impact quality of life raises important ethical questions, such as how to address bias, promote fairness, and protect privacy. Lawmakers need to consider what constitutes a fair use of AI and how human values of fairness can be embedded into AI systems. Encouraging the development of a standards framework, common terminology, and best practices through a multidisciplinary approach will be important to mitigating harmful biases that could emerge from AI use. The government should review and tailor regulation to ensure ethical principles are upheld in algorithmic design and use. While the discussion around privacy legislation extends beyond AI, AI does pose novel privacy concerns that need to be addressed to protect civil liberties and build trust in AI systems.
The government has a responsibility to help prepare the workforce of the future, as well as to ensure the AI-driven economy is inclusive and prioritizes equal opportunity. Targeted policy approaches can help fill the AI talent gap, such as supporting efforts to train, recruit, and retrain workers. The federal government can also help to identify and focus on supporting the jobs and skills of the future that will be needed to complement AI technology. Modernizing the education system and encouraging lifelong learning will also be critical to readying the workforce of the future.
The U.S. government has historically played a significant role in funding basic research that has led to the invention of transformative technologies, such as the internet, GPS, and the transistor. However, federal spending on R&D as a percentage of GDP has declined in recent years, a worrisome trend given other global players’ increasing R&D investment. R&D funding will be important to stimulating breakthroughs and innovations in AI technologies.
The notion that data is the new oil of the digital economy speaks to modern AI systems’ reliance on quality data to be effective. The U.S. can build on the OPEN Government Data Act to develop and release publicly available datasets, with appropriate safeguards, to promote innovation. Data is also critical in competing with the Chinese government, which can collect vast troves of data from its large population and surveillance efforts and use it to accelerate its AI development. As such, the U.S. can invest in few-shot machine learning techniques, which are less reliant on large datasets, to compete with authoritarian governments that have more expansive data collection capabilities and do not concern themselves with democratic values.
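To illustrate why few-shot techniques are less data-hungry, consider a minimal sketch of one simple few-shot approach: a nearest-centroid (“prototype”) classifier, which summarizes each class by the mean of just a handful of labeled examples. The feature vectors and class labels below are invented toy data, not drawn from any real system.

```python
# Minimal sketch of few-shot classification via a nearest-centroid
# ("prototype") classifier: each class is summarized by the mean of a
# handful of labeled examples, so very little data is required.
# All vectors and labels are invented toy data for illustration.
from math import dist

def centroid(examples):
    """Mean vector of a small list of equal-length feature vectors."""
    n = len(examples)
    return tuple(sum(v[i] for v in examples) / n for i in range(len(examples[0])))

def few_shot_classify(query, support):
    """support maps a class label to a few example vectors (the 'shots')."""
    prototypes = {label: centroid(vecs) for label, vecs in support.items()}
    # Assign the query to the class whose prototype is closest.
    return min(prototypes, key=lambda label: dist(query, prototypes[label]))

# Three labeled examples per class are enough to form a prototype.
support = {
    "cat": [(0.9, 0.1), (1.0, 0.2), (0.8, 0.0)],
    "dog": [(0.1, 0.9), (0.2, 1.0), (0.0, 0.8)],
}
print(few_shot_classify((0.85, 0.15), support))  # → cat
```

State-of-the-art few-shot systems are far more sophisticated, but the underlying idea is the same: generalize from a few examples per class rather than millions.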
Where possible, any regulation of AI should use a risk-based approach to evaluate how to most effectively apply federal laws and regulations to AI uses. Federal agencies should determine what risks, if any, are posed by AI uses in their domain and seek to apply or modernize regulation accordingly. This approach is consistent with guidance from the Office of Management and Budget, which aims to ensure regulatory measures do not impede U.S. innovation in AI.
Maintaining leadership in AI is of great strategic importance for U.S. national and international security. In addition to AI’s myriad applications for defense and intelligence, the U.S. must compete with nations like China and Russia, which are also investing in military applications of AI for strategic advantage. The U.S. should focus on developing AI tools for military functions that identify best practices for human-machine collaboration and teaming, and uphold the DOD’s commitment to ethical principles of AI. The U.S. government should work pragmatically with its global allies and adversaries in multilateral forums to set standards around AI use and development, and to mitigate the risk of accidents or unintended escalation.
The true potential of AI can be hard to discern, as there is a general lack of understanding of what the technology is, and sensationalized Hollywood depictions of intelligent robots have often caused AI’s capabilities to be overhyped. As such, the government should play a role in raising public awareness around AI and its related issues. Many of the aforementioned policy areas, such as training the workforce, embedding human values around ethics and fairness in algorithmic design, and cooperating in multilateral forums to develop international standards, will be important steps in building trust in AI systems.
In recent years, the federal government has made several important policy advances that can be built on by the incoming administration. Efforts from the executive branch have included the Obama administration’s 2016 National Artificial Intelligence Research and Development Strategic Plan; President Donald Trump’s 2019 Executive Order 13859, which announced the American AI Initiative; and the Office of Management and Budget’s subsequent Guidance for Regulation of Artificial Intelligence Applications.
In Congress, Reps. Will Hurd (R-TX) and Robin Kelly’s (D-IL) multi-year effort to pursue AI legislation culminated in the passage of a bipartisan national strategy for artificial intelligence (H.Res.1250) in December last year. The bipartisan AI policy approach was further bolstered by the FY2021 NDAA, which dedicated 63 pages to the National Artificial Intelligence Initiative Act of 2020, providing the funds and remit for a whole-of-government approach to maintaining leadership in trustworthy AI.
Put simply, artificial intelligence is at an inflection point. The federal government must strike a balance between limiting the technology’s potential risks and embracing its vast potential.