Transparency in Health Care AI: A Conversation with Experts

The future of artificial intelligence in health care must be grounded in transparency and trust, experts from across the health care sector and the political spectrum emphasized in an April meeting conducted by the Bipartisan Policy Center, in partnership with the Commonwealth Fund. Key takeaways addressed reimagining regulation, establishing transparency and trust, navigating liability, and understanding the current landscape.

AI transparency involves providing insight into the development, training, operation, and deployment of an AI system. In health care, transparency is crucial because it enables patients and providers to better understand the data sets AI algorithms draw on, how those algorithms are created, and how they interpret data.

Reimagining Regulation
The conversation centered on how traditional regulatory frameworks are struggling to keep pace with the ever-evolving nature of AI. In contrast to static products such as pills or devices, AI systems evolve, learn, and adapt over time. The shift from the machine learning approaches on which current FDA approvals are based to generative AI introduces new challenges for regulation. Our current regulatory processes resemble navigating with a paper map in a world equipped with GPS: outdated and inadequate for the dynamic nature of AI. Updating this infrastructure requires agencies to take a new approach to AI regulation. Attendees proposed approaches such as recurring approval points or professional licensing exams, mirroring the ongoing standards expected of human professionals. Participants also discussed third-party audits as a possible pathway toward comprehensive oversight, and others highlighted the need for clear definitions of key terms, including risk, bias, and harm, to guide effective regulation.

Participants also observed that the Supreme Court is currently weighing the future of Chevron deference in regulatory review. Under the current doctrine, courts defer to administrative agencies' reasonable interpretations of ambiguous statutes. Should that doctrine be overturned or modified, it would have broad implications for AI regulatory efforts in health care.

Establishing Transparency and Trust
Participants engaged in a vibrant discussion on transparency and trust in AI algorithms. Some underscored the significance of understanding how algorithms make decisions so that users can better grasp their capabilities and limitations. Others argued that outcomes and performance play a more significant role in earning public and professional confidence in AI, cautioning that simplifying an AI system to make it explainable may compromise its overall capability. Several participants discussed the importance of building trust within historically marginalized communities, noting that these communities may harbor greater mistrust of AI systems. They emphasized that decisions about AI deployment should involve tailored communication with these communities, including engaging them in the decision-making process and being transparent about how an algorithm reaches a decision.

Navigating Liability
The conversation delved into the complex questions of liability and accountability that come with AI deployment in health care settings. Because AI algorithms can guide diagnostic and treatment decisions, the allocation of responsibility remains unsettled. Participants grappled with pertinent questions: Who bears liability when AI falters in diagnosis or treatment? Is it the clinician, the developer, or the implementing facility? Participants agreed that legal frameworks need to adapt to the nuances of AI.

Understanding the Current Landscape
Several participants noted that while conversations about the future clinical use of AI are important, attention is also needed on AI's current applications in health care. Many health insurance companies and health systems already incorporate AI into back-end operations such as staffing, asset management, scheduling, and supply chain optimization. AI, including natural language processing (NLP), is also being used in claims review, a traditionally time-consuming process that AI holds promise to expedite. Yet much of the discussion focused on concerns about AI-driven claims review, which participants said can lead to opaque denials of benefits and lacks clear oversight.

Since the gathering, there has been some congressional action on AI with the release of a bipartisan "Roadmap for Artificial Intelligence Policy in the U.S. Senate." The HHS Office for Civil Rights also released a nondiscrimination rule that includes a section on AI use in clinical settings. Under the rule, providers and medical professionals who use clinical algorithms or predictive analytics must take appropriate action to identify and mitigate discrimination.

As AI further entrenches itself in the health care system, policymakers must continue to discuss these challenging issues, from regulatory recalibration to fostering trust and transparency. Collective action will be required to navigate the journey ahead.
