
AI and Ethics

Introduction

Artificial intelligence is changing our lives on a daily basis. From voice recognition to movie recommendations to robo-advisors, AI-powered services are all around us. AI has great potential to create new opportunities and improve lives, but longstanding ethical challenges—like bias, privacy, and power asymmetries—will evolve and can be greatly exacerbated by emerging uses of AI technologies. Issues of civil rights and liberties must therefore be front and center in discussions about the development, deployment, and oversight of AI technologies and systems. Ideally, with the right policies and an inclusive approach, AI technologies would not exacerbate but rather mitigate the existing challenges in protecting civil rights and liberties.

Fortunately, AI technologies are not an autonomous force beyond human control. All stakeholders—developers, users, consumers, and policymakers—have the power to determine how these technologies evolve, how they are applied, and ultimately how they impact people and societies. These stakeholders should work to ensure that public policy, regulation, and governance structures are well designed to meet the challenge, which requires answering many difficult ethical questions.

Take the example of an AI system used to help allocate hospital resources. The system can be trained to diagnose patients and suggest what resources should be allocated to their treatment. Its potential to improve human health is vast, but it also raises thorny ethical questions. Should the AI system make final decisions about treatment, or should those decisions rest with humans alone? How much should the algorithm prioritize saving lives versus improving the quality of life for terminally ill patients? What rights should the patient have? What governance structures should the hospital put in place to oversee the system throughout its life cycle? And because an AI system can become unintentionally biased through its training data or its algorithmic design, what should be done if the system appears to exhibit bias against a protected group, threatening to perpetuate or amplify existing societal inequities?

At its best, the AI system will improve the allocation of medical resources, producing better health outcomes, lower costs, and ultimately saved lives in an inclusive manner. But there are serious risks: the system could pick up and exacerbate human biases, worsen inequities in the health care system, and harm those who are most vulnerable.

Health care is only one area where ethical concerns about AI are being raised. In critical areas ranging from criminal justice to financial services to national defense, stakeholders are grappling with their own sets of questions. Identifying common themes and differences among industries can help guide Congress toward a thoughtful and well-tailored approach to promoting AI ethics. If the United States fails to lead, other countries will set global AI ethics standards that might not align with American values.

Public concerns about emerging technologies are not, however, unique to AI. Pharmaceuticals and automobiles are examples of technologies that have benefited society but also raised ethical concerns. In addressing those challenges, neither denialism nor sensationalism was the right response. The best policies came about when policymakers consulted the relevant stakeholders and experts, raised public awareness, and put in place thoughtful policy solutions that addressed legitimate concerns. This is the public policy approach the United States must take for AI ethics. In that spirit, the Bipartisan Policy Center, in consultation with Reps. Will Hurd (R-TX) and Robin Kelly (D-IL), has worked with government officials, industry representatives, civil society advocates, and academics to better understand the major AI-related ethical challenges the country faces.

This paper aims to bring more clarity to these challenges and to provide actionable policy recommendations that can help guide a U.S. national strategy for AI. BPC’s effort is primarily designed to complement the work of the Obama and Trump administrations, including President Barack Obama’s 2016 National Artificial Intelligence Research and Development Strategic Plan, President Donald Trump’s Executive Order 13859 announcing the American AI Initiative, and the OMB’s subsequent Guidance for Regulation of Artificial Intelligence Applications. It is also designed to advance the work done by Kelly and Hurd in their 2018 Oversight and Government Reform Committee (Information Technology Subcommittee) white paper, Rise of the Machines: Artificial Intelligence and its Growing Impact on U.S. Policy. Our goal is to provide the legislative branch with concrete actions it can take to advance AI, building on the work of both administrations.


Key Principles

Over the past several months, BPC has conducted a series of roundtables and convenings with experts, academics, industry representatives, and civil society organizations to examine concerns about AI and fairness, bias, and privacy. Based on these discussions, BPC has identified the following key principles:

  1. The federal government should further fund and encourage research and development projects that address bias, fairness, and privacy issues associated with AI.
  2. The federal government should encourage more diversity in AI talent to help mitigate unfair bias and promote fairness in AI practices.
  3. The federal government should encourage the development of voluntary standards frameworks to help create shared conceptual foundations, terminology, and best practices for fairness and bias based on a cooperative and multi-stakeholder approach.
  4. In promoting ethics and mitigating unintended bias, the regulation of AI should build on existing regulation when possible and be tailored to different use cases using a risk-based approach.
  5. AI and privacy should not be conflated, but AI-specific considerations should inform and influence privacy legislation.

The remainder of this white paper is organized as follows. Section II provides a broad overview of AI and ethics, summarizing the common foundations identified by BPC in its discussions with stakeholders from industry, civil society groups, government, and academia. Subsequent sections describe each of the five key principles listed above, pairing a brief overview with specific recommendations the United States can pursue to accelerate and sustain global leadership in AI while minimizing the likelihood of adverse impacts on civil liberties, civil rights, and innovation.
