The pandemic has revealed the centrality of technology to the economy, business, and society at large. The incoming Biden administration will have to confront growing pressure on both sides of the aisle to implement new tech policies to meet the challenges and opportunities present in an increasingly digital, interconnected world.
Over the past year, the Bipartisan Policy Center has worked with policymakers, industry leaders, and civil society to develop a set of recommendations and principles for a national strategy on artificial intelligence. In particular, AI ethics is one issue the new Congress and administration should pay close attention to.
Here are three broad ethical questions that should be explored:
- How can we reduce harmful bias? Just as humans have biases, AI algorithms can exhibit bias because of the data they are trained on and the way they are designed. Such bias can disadvantage marginalized groups and lead to unfair outcomes, such as facial recognition software misidentifying a suspect.
- How do we define and promote fairness? AI algorithms have no inherent sense of what is fair. For instance, an AI system cannot by itself determine how to allocate resources when conflicting definitions of fairness exist, such as how an autonomous vehicle should balance the safety of a driver with the safety of the passenger in hazardous situations.
- How can we protect privacy? AI technology also amplifies existing privacy concerns, such as rules around data collection, and raises new ones, such as whether and when AI systems can be used to monitor employees.
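To make the bias and fairness questions above concrete, here is a minimal, illustrative sketch (our own toy example, not a BPC tool) of one common fairness check, the demographic parity gap: the difference in favorable-outcome rates between two groups. The data and function name are hypothetical, and demographic parity is only one of several competing definitions of fairness:

```python
# Illustrative only: a toy demographic parity check. All data is made up.

def demographic_parity_gap(outcomes_a, outcomes_b):
    """Difference in positive-outcome rates between two groups (0 = parity)."""
    rate_a = sum(outcomes_a) / len(outcomes_a)
    rate_b = sum(outcomes_b) / len(outcomes_b)
    return abs(rate_a - rate_b)

# Hypothetical loan-approval decisions (1 = approved) for two groups.
group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # 6 of 8 approved (75%)
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # 3 of 8 approved (37.5%)

gap = demographic_parity_gap(group_a, group_b)  # 0.375
```

A large gap does not by itself prove an algorithm is unfair, but metrics like this give researchers and auditors a starting point for the harder, human questions of which definition of fairness should apply.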
These questions will not have simple answers and will be context specific. However, BPC has put forth several ideas and recommendations to guide AI ethics going forward.
1. Fund and support research and development projects that address bias, fairness and privacy issues associated with AI.
The government should fund both technical and non-technical research to address the ethical challenges presented by AI. Technical research can help improve AI system design. For instance, research may find better ways to design an AI system so that private data is less likely to be unintentionally shared, or develop techniques that make AI decision-making more transparent for end users.
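As an illustration of the kind of technical research described above, the sketch below shows the core idea behind differential privacy, one widely studied technique for reducing the chance that an individual's private data can be inferred from published statistics. The function name and parameters are our own illustrative choices, not anything BPC prescribes:

```python
# Illustrative sketch: release an aggregate count with calibrated Laplace
# noise so that no single individual's record can be confidently inferred
# from the output (the core idea of differential privacy).
import math
import random

def noisy_count(true_count, epsilon=1.0, sensitivity=1.0):
    """Return the count plus Laplace(sensitivity/epsilon) noise.

    Smaller epsilon means more noise: stronger privacy, less accuracy.
    """
    scale = sensitivity / epsilon
    # Sample Laplace noise via the inverse CDF (stdlib has no laplace()).
    u = random.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise
```

The privacy/accuracy trade-off controlled by `epsilon` is exactly the sort of design decision that technical research can quantify but that non-technical research and policy must ultimately weigh.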
However, technological solutions will not be enough. Many AI ethics questions involve deep philosophical questions about what is fair and just that can only be answered by humans. Funding non-technical and multidisciplinary research, such as studies by ethicists, economists, and sociologists, can provide necessary insights and inform guidelines about where the use of AI technology is appropriate and the societal implications it might have. This research can also be used for designing ethics classes for STEM and computer science disciplines.
2. Encourage diversity in AI talent to mitigate harmful bias and promote fairness in AI practices.
A diverse workforce can help better reflect society’s values in AI systems. First, a more diverse group of people can better catch poor assumptions or biased datasets in designing AI systems. For example, an algorithm designed to detect harmful speech on social media could miss terms due to a lack of cultural understanding or poor translations. In this case, a diverse and multilingual workforce will be better at identifying the range of phrases people might use, and is therefore likely to make the technology more accurate.
Second, a diverse AI workforce that more holistically reflects society’s makeup and ethical values can help guide more inclusive discussions about its fair use. Broader questions, such as whether and how stringently to monitor speech on social media, would be better addressed by a workforce that represents society’s myriad views.
Federal agencies and businesses should look for ways to encourage diversity and retain diverse talent across all levels of an organization. Policymakers should consider a review of the talent pipeline from early education through a person’s career. An evidence-based approach should be used to determine where programs would most effectively address AI talent diversity challenges.
3. Develop a voluntary AI standards framework to encourage shared conceptual foundations, terminology, and best practices.
Shared frameworks and terminology are a key component of developing standards for ethical AI. Consistent standards can help build trust in AI systems, establish quality assurance, incentivize market competition, and ensure AI innovations meet inclusive and transparent benchmarks. A common lexicon can enable clearer communication and better discussions about the technology between policymakers, academics, industry and civil society organizations.
Constructive standards should be developed with input from diverse, multidisciplinary stakeholders, along with public engagement on ethical questions. This will both foster understanding of the societal impacts of AI and build consensus on how to incorporate human values into AI design. The National Institute of Standards and Technology’s voluntary privacy framework can serve as a model for developing AI standards. Voluntary standards may not be sufficient on their own, but they can help inform any decision regarding regulation.
4. Build AI regulation on existing rules where possible and tailor and modernize to different use-cases with a risk-based approach.
Regulation has been an important policy tool for addressing issues of civil rights and civil liberties. However, the debate over how to modernize the regulatory structure for an AI-driven economy is in its early stages. As such, a review of the existing body of relevant regulation should be undertaken to identify how and where current laws apply to AI. This can help identify gaps and ambiguities in existing law and determine how to address them.
Where appropriate, existing regulation should be tailored and modernized using a risk-based approach to address AI-specific concerns. A risk-based approach evaluates the potential impacts or consequences of different AI use cases, to guide what types of regulation are necessary. For example, autonomous weapons systems should be subject to more stringent controls than a chatbot designed to streamline online help requests.
Ensuring that AI practices uphold ethical values requires regulatory agencies to have the right tools, talent, and resources to address legitimate AI concerns. Congress can play an oversight role in agencies’ implementation and enforcement of federal regulations, verifying that regulatory obligations within their jurisdictions are being met.
5. Ensure AI-specific considerations inform and influence privacy legislation.
Although AI and privacy should not be conflated, ensuring privacy is protected is a significant part of developing ethical AI. As the value of data increases for tech companies, the incentives to collect more personal data to improve their AI systems can amplify existing privacy considerations. In addition, certain AI tools can raise new privacy concerns, such as the debate over whether facial recognition technology used for surveillance is a breach of civil liberties. A national privacy framework is critical to building trust with the US public and establishing domestic and international privacy standards. As such, AI-specific considerations need to be included in legislative discussions.
The new Biden administration and Congress should focus on the best ways to promote fairness, mitigate harmful bias, and protect privacy in AI. Working together, they should advance a robust tech policy agenda that includes funding R&D, diversifying the workforce, encouraging voluntary standards, promoting smart regulation, and protecting privacy. In doing so, the US can be a global leader in AI innovation and ethics.