
Defining High-Risk, High-Reward AI

Artificial intelligence (AI) systems and use cases are evolving rapidly, and the proliferation of widely accessible generative AI technologies will accelerate innovation. Generally accessible generative AI systems, which already help conduct research, create marketing and political campaign content, translate content, propose product ideas, and perform many other tasks, are impressive and versatile. We have only scratched the surface of their current potential, let alone their future capabilities. Yet the full range of sociotechnical risks that generally accessible generative AI systems may pose to society remains unknown.

Recently, generative AI technologies have taken center stage in media publications and policy forums, but many other AI technologies have also become far more advanced and ubiquitous, eliciting both excitement and concern. Several of these concerns stem from uncertainty about the risks that particular AI systems pose across use cases and the lack of consensus around appropriate risk mitigation. To help resolve this uncertainty, a mix of legally binding (“hard law”) and non-legally binding (“soft law”) governance will likely be necessary.

Some AI use cases have implications for physical safety, national and economic security, and access to key resources and opportunities (like health care, financial capital, employment, education, housing, and government services). BPC’s prior work on impact assessments demonstrated that these use cases can pose high risks, but that does not mean they are more harmful than beneficial. Several high-risk use cases can also yield high rewards.

Consequently, the optimal balance of hard and soft law will differ across AI use cases. Requirements and restrictions should also be use-case-specific. In general, governance frameworks should subject AI use cases that pose high risks to more stringent requirements. Those requirements should not be so stringent that they prohibit high-risk, high-reward AI use cases or stifle research and development initiatives that could produce novel high-reward use cases. However, in the United States, which AI use cases are high-risk, high-reward is an open question.

Building consensus on which AI use cases are high-risk, and which of those are also high-reward, can help U.S. policymakers and other stakeholders develop and implement effective AI governance frameworks.

This piece aims to help advance AI governance policy conversations by providing an overview of:

  1. the current AI governance landscape;
  2. the lack of consensus on which AI use cases are high-risk;
  3. the lack of consensus on which high-risk AI use cases may also be high-reward; and
  4. perspectives on defining high-risk, high-reward AI.

Current Policy Landscape: Unclear Definitions and Governance

It is no secret that the pace of AI (and broader technological) innovation has far exceeded the pace of policy development and implementation. U.S. policymakers broadly recognize that calibrating legislative restrictions and requirements based on the risks that different AI technologies pose in different settings can help mitigate risks of harm without unduly impeding technological innovation and economic efficiency. Nevertheless, AI actors have only limited guidance from U.S. federal government agencies and Congress on which AI use cases are high-risk and high-reward.

Voluntary consensus frameworks, like the National Institute of Standards and Technology’s (NIST’s) Artificial Intelligence Risk Management Framework (AI RMF), can help AI stakeholders self-regulate by mapping, measuring, and managing AI risks and building AI risk management governance programs. However, the AI RMF does not provide clear, detailed guidance on when risks are too high and/or rewards are too low to proceed responsibly with an AI use case. Similarly, the White House Office of Science and Technology Policy’s Blueprint for an AI Bill of Rights and the Government Accountability Office’s non-binding AI accountability framework provide general insights into the risks and benefits that various AI technologies can produce. Nonetheless, these resources do not explicitly define high-risk AI use cases or clearly identify which high-risk AI use cases are also high-reward use cases.

U.S. states have considered, and in some cases enacted, legislation that governs the development and use of AI technologies in particular contexts. Illinois enacted a law regulating the use of AI on job applicant interview video footage. Maryland enacted a similar law that applies exclusively to face recognition technologies. A law in Colorado restricts the use of algorithms and “predictive models” in insurance practices. Several other states have considered AI legislation, and, absent preemptive federal legislation, additional states may enact AI laws.

In addition to navigating the compliance challenges the emerging legal patchwork creates, AI actors must contend with ambiguity in federal laws and regulations. A wide range of general and sectoral federal laws may have implications for AI. Several federal agencies have already started to provide guidance on how existing laws apply to AI use cases, but the extent to which certain existing legal protections apply to AI design, development, deployment, use, and oversight remains unclear. The National AI Initiative Act aims to establish and help coordinate federal government AI initiatives, including programs that consider AI risks and advance trustworthy AI, but it does not define high-risk AI or establish related safeguards.

In contrast, pending legislation in the European Union, the EU AI Act, tries to explicitly define high-risk AI (although the EU AI Act focuses on “AI systems” rather than “AI use cases”). Several stakeholders support the AI Act’s effort to tailor restrictions to risk levels, but opinions diverge on the AI Act’s definition of “artificial intelligence system” and specific risk classification scheme (particularly its classification of “high-risk” AI). As currently drafted, the EU AI Act would designate several AI systems as high-risk and establish conditions for classifying additional AI systems as high-risk. The draft Act would not specify which high-risk AI systems can also be high-reward systems but would ban AI systems that pose an “unacceptable risk.” By placing certain AI systems into the “unacceptable risk” category, the Act implies that these AI systems pose risks that vastly outweigh the rewards. Whether other high-risk AI systems yield rewards significant enough to outweigh the costs of complying with the requirements for high-risk AI seemingly would be up to the market to determine.

Absent U.S. federal government action, portions of the EU AI Act may become the de facto international standard, but Congress seems motivated to avoid this possibility and to demonstrate U.S. leadership in trustworthy AI innovation. Last Congress, Republicans and Democrats introduced legislation that provides some insight into which AI use cases may pose high risks and how best to mitigate those risks. Recent hearings displayed bipartisan interest in continuing AI policy conversations and considering AI governance legislation in the 118th Congress.

Defining High-Risk, High-Reward AI

So far, both legislation and non-legislative AI governance frameworks seem to suggest that an AI use case’s risk and reward potential depends, at least in part, on the type of AI system and the way users deploy the system.

The data and methods developers and users employ to train and test the AI system, the data the system processes in its deployment environment, the accuracy and reliability of the system in laboratory and deployment environments, and the extent to which the system’s functions and outputs are explainable all may impact the type and level of risks and rewards an AI use case can produce.

Deployment environment characteristics also can impact stakeholders’ assessments of whether an AI use case is high-risk and high-reward. The purpose for deploying the AI system, the ways in which humans oversee and otherwise interact with the system, the number of people interacting with the system, the consequences of a system malfunction or failure in the deployment setting, and the potential negative impacts to various stakeholders even when the system performs properly all may influence the AI use case’s potential risks and rewards.

Determining whether an AI use case is high-risk and high-reward entails more than identifying and measuring potential risks and rewards; it also requires evaluating them. Stakeholders must decide whether to evaluate risks and rewards in absolute terms, in terms of relative improvements over alternative technologies or manual processes, or both. Whether to assess a use case’s potential risks and benefits on their own or relative to one another is also up for debate.


AI governance frameworks should balance risks and rewards by establishing requirements that mitigate the risks an AI use case can pose and support the potential benefits it can offer society. A combination of use-case-specific “hard-law” and “soft-law” approaches may help keep high-risk AI out of the hands of negligent or bad actors without significantly obstructing use cases that yield higher rewards.

Conclusion

Federal AI governance frameworks that establish clear definitions and safeguards could help cultivate public trust in high-risk, high-reward AI. Through a mix of legal requirements and soft law guidance, these frameworks should calibrate restrictions and requirements to different AI use cases’ potential risks and rewards. This use-case-specific approach would help ensure that governance regimes promote safe, effective AI adoption in ways that protect civil and human rights, national and economic security, and broader societal well-being.

To facilitate this tailored approach to AI governance, policymakers and other stakeholders must evaluate the risks and benefits that different AI use cases can produce and determine which requirements and restrictions should apply to various use cases.

Developing multi-stakeholder consensus definitions of high-risk, high-reward AI use cases could help advance AI governance conversations and initiatives by establishing a common understanding and focusing stakeholders’ attention on some of the most pressing issues in AI governance.
