
Insights on AI in Health Care: BPC Responds with Opportunities and Challenges

May 6, 2024

The Honorable Ami Bera
United States House of Representatives
Washington, DC 20515

RE: The State of Artificial Intelligence in Health Care RFI

Dear Representative Bera,

The Bipartisan Policy Center appreciates your interest in the critical intersection of artificial intelligence and health care. As evidenced below, this field holds immense potential for transformative breakthroughs facilitated by AI. Exciting developments await in improving patient care, cost optimization, and alleviating clinician burnout, among many other benefits. However, navigating this path requires vigilance regarding potential pitfalls in AI implementation. It is imperative that lawmakers be cognizant of these challenges as you consider legislative action.

BPC is a nonprofit organization founded in 2007 to combine the best ideas from both parties to promote health, security, and opportunity for all Americans. Through our recommendations, BPC’s Health Program strives to develop bipartisan policies across a variety of health issues that improve the nation’s health outcomes, reduce rising health care costs, improve equity in health services, and make quality health care available, affordable, and accessible for all.

In 2019, BPC developed a National AI Strategy for Congress in collaboration with former Reps. Will Hurd (R-TX) and Robin Kelly (D-IL). Through this initiative, BPC held a series of roundtables with government officials, industry representatives, civil society advocates, and academics. Subsequently, we produced four whitepapers addressing key aspects of AI, including its impact on the workforce, national security, U.S. leadership in research and development, and ethical considerations. These whitepapers offered several recommendations that were incorporated into H.Res.1250.

BPC remains committed to informing Congress, evaluating policy proposals, and fostering dialogue with stakeholders on AI. Our latest initiative, “AI 101,” aims to equip policymakers and their staff with foundational knowledge of AI, empowering them to make well-informed decisions regarding AI implementation.

BPC’s research on digital health technologies informs our comments on AI in health care, particularly in shaping a regulatory framework for the next generation of AI-enabled devices. Over the past year, BPC crafted evidence-based federal policy recommendations for the effective utilization of remote patient monitoring (RPM) technology. BPC assessed patients’ access to and use of RPM technologies, as well as RPM’s impact on health outcomes, quality of care, and costs. Through a series of interviews and a private roundtable with health policy experts, federal officials, technology leaders, providers, payers, consumers, and academics, BPC sought insights into the opportunities and challenges associated with remote patient monitoring.

Additionally, we recently published two pieces on AI in health care. The first looked at the regulatory environment in which health AI is currently governed and the second examined the details of the Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing (HTI-1) final rule.

BPC appreciates the opportunity to comment. For further information or to connect with BPC, please contact Katie Adams ([email protected]).

Sincerely,

Marilyn Serafini
Executive Director of the Health Program

Tom Romanoff
Director of the Technology Project

Julia Harris
Director of the Digital Health Program

Katie Adams
Senior Policy Analyst, Health Program

Sabine Neschke
Policy Analyst, Technology Project


Implementation

Question #1: How extensively is AI currently being implemented in health care institutions and other settings across the country? 

Generative artificial intelligence (GAI) is rapidly transforming the health care landscape, with implementation occurring across various facets of the industry. Though forms of AI have been used in health care for decades, recent advancements in the technology, the increasing availability of data and computing power, and evolving public trust and regulatory standards have accelerated its adoption and impact at an unprecedented pace. From administrative support to clinical decision-making, AI is altering health care delivery in novel and groundbreaking ways.

Current State of AI Implementation

A recent survey by The Center for Connected Medicine highlights AI as a focal point for health care executives nationwide, ranking as the most exciting emerging technology.1 The expectation is that AI will deliver improved diagnostic accuracy, faster treatment delivery, and better patient experiences. Notably, even the World Health Organization has deployed S.A.R.A.H. (Smart AI Resource Assistant for Health), a generative AI-powered chatbot.2 Furthermore, a survey of hospital executives indicates that nearly half of all hospitals currently leverage AI to address workforce challenges (e.g., capacity management, with algorithms that predict workflows, asset needs, and patient demand to help determine staffing and asset utilization), with this number expected to rise in the future.3,4

Many health care providers integrate AI as an additional tool in their practice. A poll published by the American Medical Association reveals that about 38% of physicians currently use AI tools. The most prevalent use (14%) is generating discharge instructions, care plans, and progress notes.5 At the time of the poll, only 11% were regularly employing AI-assisted diagnosis tools.

The rapid advancement and adoption of AI in health care have outpaced the implementation of adequate oversight and governance policies within the health care system. The Center for Connected Medicine survey also found that “only five of 31 respondents (16%) said their organizations had a system-wide governance policy specifically intended to address AI usage and data access.”6 A recent Bain & Company study of health system executives reinforces this finding, with only 6% of those surveyed having an established generative AI strategy.7 These gaps underscore the urgent need for comprehensive governance frameworks to ensure responsible and ethical deployment of AI in health care.

Federal Activity

The Food and Drug Administration (FDA) has approved over 500 AI-enabled medical devices; however, a study examining their utilization and billing suggests that only a handful have achieved substantial market adoption.8

The Department of Health and Human Services established the Office of the Chief Artificial Intelligence Officer (OCAIO), published an Artificial Intelligence Strategy and a Trustworthy AI (TAI) Playbook in 2021, and maintains an inventory of AI use cases. In response to the administration’s Executive Order on AI, the department has also developed an AI Task Force and National Strategy, which have yet to be made publicly available.

More about the different applications of AI can be found in Question #3.

Question #2: What areas of health care are benefiting the most from AI integration, and what are the primary challenges hindering further adoption? 

AI integration in health care has notably enhanced medical imaging interpretation, drug discovery, personalized medicine, health care operations, and remote monitoring. However, further adoption faces key challenges: ensuring data quality and privacy, achieving interoperability among disparate systems, navigating regulatory hurdles, addressing ethical and legal considerations, clarifying liability, and preparing health care professionals for AI adoption.

Data Quality and Privacy

The integration of AI into health care heightens concerns regarding data privacy and security. AI algorithms require access to large volumes of data for training and validation, raising questions about safeguarding patient privacy while providing access to necessary data. Moreover, there are concerns about the potential for bias or discrimination in AI algorithms, particularly when trained on data that reflect existing health care disparities. For example, if a population is historically underrepresented in health care claims data, it will not be appropriately accounted for in an AI algorithm trained on that data. Another concern is how data used to train an AI for billing purposes could begin to normalize an institution’s particular billing practices: if an institution has a practice of upcoding (unnecessarily billing for higher-acuity services), there is a danger that those billing patterns could become baked into the AI, which would then “upcode” at scale.
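
As a concrete illustration of the underrepresentation risk, the following is a minimal Python sketch of a pre-training audit that compares subgroup shares in a training dataset against external population benchmarks. The column names, group labels, and benchmark figures are hypothetical assumptions for illustration, not drawn from any real dataset.

```python
# Hypothetical pre-training audit: compare each subgroup's share of the
# training data against an external population benchmark, so that
# underrepresented groups are surfaced before a model is trained.
import pandas as pd

# Illustrative benchmark shares (assumed values, not real statistics).
POPULATION_BENCHMARKS = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}

def representation_gaps(claims: pd.DataFrame, group_col: str = "group") -> pd.DataFrame:
    """Return each subgroup's observed share of rows vs. its benchmark."""
    shares = claims[group_col].value_counts(normalize=True)
    rows = []
    for group, benchmark in POPULATION_BENCHMARKS.items():
        observed = float(shares.get(group, 0.0))
        rows.append({
            "group": group,
            "observed_share": observed,
            "benchmark_share": benchmark,
            "gap": observed - benchmark,  # negative = underrepresented
        })
    return pd.DataFrame(rows)

if __name__ == "__main__":
    # Toy dataset in which group_c is underrepresented vs. its benchmark.
    claims = pd.DataFrame({"group": ["group_a"] * 70 + ["group_b"] * 25 + ["group_c"] * 5})
    print(representation_gaps(claims))
```

In practice, such an audit would be one input among many; closing a gap may require new data collection rather than statistical reweighting alone.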

For AI to effectively serve diverse populations, it must be trained on high-quality data that represent the full breadth of human diversity. This necessitates proactive efforts to include historically marginalized groups in the training data; however, these are also the very groups that may harbor the most mistrust toward AI systems. For instance, a 2018 article highlighted how machine learning models in dermatology, trained primarily on images of lighter skin, misdiagnosed skin cancer in individuals with darker skin tones. This emphasizes the critical need for diverse and inclusive data collection practices to ensure that AI systems accurately serve all demographics.9

BPC has consistently prioritized patient privacy regarding medical information. In 2013, we conducted a summit focusing on the utilization of big data in health care. Additionally, we published a white paper in 2012 emphasizing the significance of information technology in health care. BPC’s broader work on data privacy includes analyses of federal and state-level legislation, such as comprehensive and youth data privacy bills, and the privacy implications of specific emerging technologies, like smart homes and face recognition technologies.

The United States lacks a comprehensive privacy law. The primary health-related privacy statute, the Health Insurance Portability and Accountability Act (HIPAA), regulates the use of protected patient health information when held by certain covered entities, including health care providers, health insurers, and the business associates of those individuals or organizations.10 HIPAA may not cover various types of actors and data, such as certain health-related apps and consumer devices.11 Developers of AI tools may rely on these third-party apps for data to train their algorithms.

“First Into the Breach,” a BPC explainer, examines the recently issued rule designed to tackle transparency concerns regarding algorithm data and development. The rule is narrow in scope and includes a voluntary certification verifying that health IT products, including electronic health records, adhere to standards for data exchange, privacy, and security.12

What has not been answered and needs more exploration is the tension between an AI system that is transparent and explainable but less powerful, and one that is more capable but less explainable. Do providers need to trust the AI by understanding the data used and how an outcome was determined, or is a favorable outcome sufficient proof of AI’s utility?13

Bias

Bias in AI algorithms is a serious risk and a continuing concern about the technology’s use in clinical decision-making. AI systems can exacerbate existing human biases or suffer from inequitable representation of data in training. For example, inconsistencies in the data fed into risk adjustment models may skew risk scores (e.g., one health care provider might be more rigorous in documenting patient diagnoses than another, making their patients appear “sicker” when the data merely reflect more thorough documentation). Additionally, AI algorithms, reliant on vast amounts of data for training, can inadvertently exacerbate biases present in the data.14 A study published in 2019 found that an algorithm assigned Black patients risk scores similar to those of white patients even though the Black patients were actually sicker.15 The algorithm assigned risk based on health costs, and historically less money is spent on Black patients even when they have the same need for care. This has serious implications, as payers and providers increasingly use population-level data to identify individuals for complex care management programs, home-based outreach, clinical interventions, and other targeted services.
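
A toy numerical sketch can make this proxy problem concrete. All figures below are invented for illustration; the point is only that ranking patients by historical cost can bury a sicker patient who historically received less spending.

```python
# Toy illustration of label-choice bias: ranking patients by historical
# cost (a proxy for need) vs. by a direct measure of illness burden.
patients = [
    # (id, historical_cost_usd, chronic_conditions) -- invented numbers
    ("patient_1", 12_000, 2),  # higher spending, moderately ill
    ("patient_2", 12_000, 5),  # same spending, but much sicker
    ("patient_3",  4_000, 4),  # low spending despite high need
]

# The cost-proxy ranking drops patient_3 to the bottom even though they
# have more chronic conditions than patient_1.
by_cost = sorted(patients, key=lambda p: p[1], reverse=True)
by_need = sorted(patients, key=lambda p: p[2], reverse=True)

print("Ranked by cost proxy:    ", [p[0] for p in by_cost])
print("Ranked by illness burden:", [p[0] for p in by_need])
```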

In the absence of federal legislation, federal agencies and private-sector entities have taken steps to develop best practices for mitigating AI risks and promoting trustworthy AI innovation. For example, the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework (RMF) provides guidance on mapping, measuring, managing, and governing sociotechnical risks, including safety and bias risks, throughout the AI lifecycle.

The AI RMF builds on NIST’s prior work on AI bias and explains how AI impact assessments, along with other policies, procedures, processes, and tools (including AI audits), can bolster diversity and combat bias.16

As part of the Biden administration’s executive order on AI, a number of health care companies have voluntarily pledged to uphold the “FAVES” principles (fair, appropriate, valid, effective, and safe use of AI), aimed at addressing these concerns.17 The Office for Civil Rights (OCR) also addresses bias with its final rule on nondiscrimination under Section 1557 of the Affordable Care Act (ACA) (more on this in the liability section).18

Interoperability and AI Literacy

Efforts to improve interoperability with AI are underway, but substantial challenges persist. The complexity stems from the wide array of existing health IT systems, and integrating AI with them poses financial and technical challenges. This is especially true for facilities and regions with limited resources.19

BPC has advocated for comprehensive training programs to empower workers with the skills to effectively utilize AI tools.20 However, such training adds costs and logistical burdens for health care institutions. Moreover, concerted efforts are needed to equalize access, such as the program initiated by the Duke Institute for Health Innovation, which established a network of access sites to facilitate access to AI tools in lower-resourced areas.21

Educating providers and health care personnel to proficiently utilize AI tools is crucial not only for enhancing efficiency but also for fostering trust in the technology. Despite the potential benefits of AI in streamlining processes and improving patient outcomes, concerns persist regarding its impact on job displacement and patient safety. For instance, in April 2024, nurses in several states protested the implementation of AI in their hospital systems.22 While AI can automate certain tasks and augment decision-making processes, the human element remains irreplaceable in health care. Maintaining a balance between leveraging AI’s capabilities and preserving the personalized care and expertise provided by human health care professionals is essential. This concept is commonly referred to as keeping a “human in the loop” in AI implementation and development; however, standardized definitions and guidelines for how this will be concretely implemented have yet to be established.23

Liability Concerns

There is limited legal and ethical guidance on who bears responsibility when AI produces incorrect clinical diagnoses or makes erroneous clinical recommendations that result in patient harm. Among the existing guidance is a new Federation of State Medical Boards report, which says that doctors who choose to use AI in clinical decision support must “accept responsibility for responding appropriately to the AI’s recommendations.”24

Determining accountability becomes increasingly complex when multiple parties are involved in the deployment and development of an AI program. Moreover, the contours of liability policy can significantly impact clinical decision-making.25 For example, the risk of penalties may dictate whether a provider trusts their own clinical judgment or defers to an algorithm. Current legal frameworks may not be adequately equipped to address the burgeoning issues of liability and AI. A Stanford study found minimal case law in this area and concluded that tort law has yet to evolve to match the challenges posed by AI and liability issues.26

The HHS Office for Civil Rights (OCR) finalized its rule addressing AI liability as part of its nondiscrimination rule under Section 1557 of the Affordable Care Act (ACA).27 The aim of the rule is to prevent discrimination in the use of “patient care decision support tools,” including those that use AI, for activities such as patient screening, risk assessment, diagnosis, and health care management. The rule says providers must make reasonable efforts to know what is in the tools they use and whether those tools might contribute to discrimination. This places the onus of AI-related actions on health care providers rather than AI developers; thus, providers must possess comprehensive knowledge of the tools they use. As NIST states in the AI RMF, “all parties and AI actors should manage risk in the AI systems they develop, deploy, or use as standalone or integrated components.”28

Payment and Reimbursement Questions

Payment for AI remains an open question, both for its implementation within health systems and for how its use is reimbursed. Some AI applications, like IDx-DR (a diabetic retinopathy diagnostic), have received traditional Centers for Medicare & Medicaid Services (CMS) coverage codes. However, questions linger regarding whether the traditional fee-for-service model will be suitable for AI technology in health care.29 Given the unique nature of AI-driven health care, there is a need to explore alternative payment models. Value-based reimbursement, which aligns payments with outcomes rather than service volume, could serve as an alternative model for AI services. Clarifying payment mechanisms will be crucial for ensuring sustainable and equitable implementation.

Regulatory

In contrast to static products such as pills or devices, AI exhibits dynamic evolution, learning, and adaptability. The transition from the machine learning AI (ML/AI) on which current FDA approvals are based to generative AI, which creates new content from data, poses challenges for regulation.30 These challenges call for innovative regulatory approaches that recognize the fluid nature of the technology. Proposed solutions include periodic approval checkpoints, professional licensing exams akin to those for human professionals, and third-party audits like the assurance lab model proposed by the Coalition for Health AI (CHAI).31,32 While larger institutions may have the capacity to navigate complex regulatory landscapes, smaller organizations may face significant challenges in meeting regulatory requirements.

This highlights the need for tailored support and guidance to ensure that all organizations can effectively navigate the regulatory landscape and uphold the highest standards of safety and ethical practice.

Question #3: What are the various applications of AI in clinical or operational contexts? 

AI has a wide range of applications in clinical and operational contexts.

Medical Imaging

AI assists radiologists in interpreting images from X-rays, MRIs, and CT scans to detect abnormalities. In 2018, the FDA approved an AI device called IDx-DR for diagnosing diabetic retinopathy. Since then, a significant number of patients have received faster and more accurate diagnoses than under the previous system, which relied on time-consuming human review. According to a Stanford study, individuals diagnosed by the AI were also more likely to follow up on their screening. A Johns Hopkins study shows that in pediatric populations, AI was able to screen 95% of patients who needed diabetic retinopathy screening, compared with 49% before its use.33,34 Medical imaging accounts for the largest share of FDA-approved AI applications, with radiologists utilizing the technology for expedited image processing and enhanced accuracy in disease diagnosis.35,36 With earlier disease detection, patients experience better outcomes. Additionally, AI can help generate personalized treatment plans for each individual, improving the chance of successful disease management.37

Predictive Analytics 

AI algorithms can analyze patient data to predict the likelihood of certain medical events, such as readmissions, complications, or disease progression. These predictions allow health care providers to intervene early and deliver proactive care. Duke’s hospital system implemented an augmented intelligence program called Sepsis Watch, which “culls data such as vital signs, test results, comorbidities, demographics, and medical history from patients’ EHRs every five minutes” to help predict sepsis in the emergency department.38 At the Mayo Clinic, AI is used to detect left ventricular dysfunction, enabling doctors to predict dysfunction with 93% certainty.39 AI is also improving patient outcomes in cancer detection and diagnosis.40
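
The polling pattern described for Sepsis Watch can be sketched in a few lines of Python. This is a simplified, hypothetical illustration only: `fetch_latest_vitals` stands in for a real EHR integration, and the crude rules-based score below is not the validated deep-learning model Sepsis Watch actually uses.

```python
# Hypothetical early-warning loop: poll recent vitals at a fixed interval
# and surface high-risk patients for clinician review.
import time

def fetch_latest_vitals(patient_id: str) -> dict:
    """Stand-in for an EHR/FHIR query; returns fixed toy values here."""
    return {"heart_rate": 118, "temp_c": 38.6, "resp_rate": 24}

def warning_score(v: dict) -> int:
    """Crude rules-based score; real systems use validated models."""
    score = 0
    score += v["heart_rate"] > 100
    score += v["temp_c"] > 38.0 or v["temp_c"] < 36.0
    score += v["resp_rate"] > 22
    return score

def monitor(patient_ids: list[str], interval_s: int = 300) -> None:
    """Re-score every `interval_s` seconds (300 s = every five minutes)."""
    while True:
        for pid in patient_ids:
            if warning_score(fetch_latest_vitals(pid)) >= 2:
                print(f"ALERT: {pid} flagged for clinician review")
        time.sleep(interval_s)
```

In a real deployment, alerts like these would route to a rapid response team for human review rather than trigger treatment automatically, consistent with the augmented intelligence framing above.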

Drug Discovery and Development

AI-powered algorithms can accelerate drug discovery by analyzing vast amounts of biomedical data to identify potential drug candidates, predict their efficacy, and optimize their molecular structures. This is helpful in drug development because an AI algorithm can swiftly sort through millions of pieces of data to determine whether a new drug compound could be effective in treating a disease through “evidence of target-disease associations.” Approximately 97% of cancer drugs fail in the initial testing stage; AI and machine learning could detect likely failures faster, saving both the time and costs associated with conducting initial trials on drugs that may ultimately fail.

While AI may not be flawless in identifying all successful drugs, it could aid in immediately identifying unsuccessful molecules. This is just one example of how AI can be utilized in drug development. AI has also been successful in drug reprofiling, finding new therapeutic uses for existing drugs.41
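
As a schematic of this triage idea, the following is a minimal Python sketch in which a simple classifier scores candidate compounds so that likely failures can be deprioritized before costly assays. The descriptor features, labels, and threshold are all synthetic placeholders; real discovery pipelines use far richer chemical representations and models.

```python
# Hypothetical compound triage: score candidates with a simple classifier
# and deprioritize those with very low predicted activity.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X_train = rng.normal(size=(200, 8))  # 8 made-up molecular descriptors
# Synthetic "active vs. inactive" labels for illustration only.
y_train = (X_train[:, 0] + X_train[:, 3] > 0).astype(int)

model = LogisticRegression().fit(X_train, y_train)

candidates = rng.normal(size=(5, 8))
scores = model.predict_proba(candidates)[:, 1]
for i, s in enumerate(scores):
    verdict = "advance to assay" if s >= 0.2 else "deprioritize"
    print(f"compound_{i}: p(active)={s:.2f} -> {verdict}")
```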

Remote Monitoring

AI also helps providers manage the large streams of data from remote patient monitoring devices, where the volume of patient-generated information can otherwise become unmanageable. Specifically, AI can flag values that are out of range and identify patterns in the data, facilitating early intervention and personalizing patient care. Researchers note a variety of current use cases, including diabetes monitoring and detecting deviations in movement patterns (such as falls).42
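
A minimal Python sketch of this kind of triage follows. The glucose range and trend rule are illustrative assumptions, not clinical guidance; a deployed system would use clinician-configured thresholds and validated models.

```python
# Hypothetical RPM triage: flag out-of-range readings and a simple upward
# trend in patient-generated glucose data so a clinician can review them.
from statistics import mean

GLUCOSE_RANGE_MG_DL = (70, 180)  # assumed target range, illustration only

def triage_glucose(readings: list[float]) -> dict:
    low, high = GLUCOSE_RANGE_MG_DL
    out_of_range = [r for r in readings if not low <= r <= high]
    # Crude trend check: recent average more than 20% above the earliest.
    trending_up = len(readings) >= 6 and mean(readings[-3:]) > 1.2 * mean(readings[:3])
    return {
        "needs_review": bool(out_of_range) or trending_up,
        "out_of_range": out_of_range,
        "trending_up": trending_up,
    }

# Example: the last readings drift above range, so the patient is flagged.
print(triage_glucose([95, 110, 102, 150, 190, 210]))
```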

Operations/Management

While media coverage often highlights AI’s potential in predictive analytics, clinical decision support systems (CDSSs), disease detection, and drug discovery, its most significant integration to date is in nonclinical settings for administrative functions.43 AI has been incorporated into health care administrative functions, promising to optimize hospital and health system operations with machine learning algorithms and natural language processing. Take, for instance, the application of natural language processing to claims analysis.44 The automation of administrative processes, such as billing, scheduling, and claims review, holds the potential to reduce costs and errors. AI algorithms can analyze medical billing data to identify patterns and trends, either detecting fraud or flagging potential errors. Other AI systems can quickly analyze physician availability and facility resources to fine-tune scheduling.
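
To illustrate the billing use case, here is a minimal Python sketch using an off-the-shelf unsupervised anomaly detector on synthetic claim features. The features and data are invented; in practice, flagged claims would go to a human reviewer, not to automatic action.

```python
# Hypothetical billing screen: fit an unsupervised anomaly detector on
# synthetic claim features (charge amount, line items, units billed) and
# flag outliers for human review.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# 500 synthetic "typical" claims plus two exaggerated outliers.
typical = rng.normal(loc=[200.0, 3.0, 2.0], scale=[50.0, 1.0, 0.5], size=(500, 3))
outliers = np.array([[2500.0, 14.0, 9.0], [1800.0, 11.0, 8.0]])
claims = np.vstack([typical, outliers])

model = IsolationForest(contamination=0.01, random_state=0).fit(claims)
flags = model.predict(claims)  # -1 = anomalous, 1 = typical
print(f"{(flags == -1).sum()} of {len(claims)} claims flagged for human review")
```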

However, concerns have arisen regarding the use of predictive algorithms for functions like medical claims review. An investigation by STAT News “found that, for all of AI’s power to crunch data, insurers with huge financial interests are leveraging it to help make life-altering decisions with little independent oversight.”45 One AI prediction model used in claims review, nH Predict, has drawn several complaints alleging it was used to deny health coverage.46,47 Lawsuits filed against companies using this model allege that the company “used nH Predict to reject claims, despite knowing that roughly 90% of the tool’s denials of coverage were faulty, overriding determinations by patient physicians that the expenses were medically necessary.”48

Another nonclinical application involves using predictive algorithms in health care supply chain management to forecast demand. AI has also been used in fraud detection and prevention to identify fraudulent activity as it occurs.49

Physician Support

AI is increasingly leveraged to enhance practitioners’ efficiency in completing charting duties, with the aim of reducing physician burnout.50 The Permanente Medical Group (TPMG) introduced ambient AI scribes, which are utilized by thousands of physicians across various medical specialties and locations. However, this technology needs refinement, as clinicians often find themselves correcting AI-generated notes and providing substantial clinical oversight.51 As referenced in Question #1, the AMA poll of physicians indicates that AI is perceived as a tool to supplement, not supplant, care.52 Providers may also use a chatbot to ask questions and search for information more quickly than with a regular search engine. While using generative AI to aid in patient care holds promise for clinicians, a Lancet article highlights ongoing challenges, including AI making unexpected clinical changes and medical personnel having to meticulously check its work, undermining its potential to reduce burnout. Another study, in the New England Journal of Medicine, examines the current limitations of large language models in medical coding.53 Securing provider buy-in and input on AI, and on changes to the technology providers use, is critical to implementation.

Question #4: How does AI distinguish itself from other health care technologies? How does AI support existing health care technologies? 

AI’s benefits lie in its speed at combing through data and detecting patterns. It stands out from traditional methods of data analysis due to its ability to process high volumes of data quickly and efficiently, enabling health care professionals to gain insights in real time. AI can also enhance existing health technologies, such as remote patient monitoring (RPM)/wearables and electronic health records (EHRs), bolstering the capabilities of these platforms, identifying trends in the data, and allowing for early detection of abnormalities that inform health decisions. AI’s integration with health IT has significantly increased in the past year, with Epic, one of the largest health information system providers, incorporating AI into its EHRs, including ambient AI and AI-assisted medical coding.54 This trend is expected to continue.

The FDA has cleared hundreds of devices that use AI/machine learning, but as of October 2023, none incorporated large language models or generative AI.55 The FDA does not directly approve algorithms; rather, it evaluates the devices that incorporate them for specific uses. The majority (79%) of these devices are in radiology, as discussed in Question #3, highlighting the significant role of AI in health care today.56 While machine learning uses algorithms to parse data and recognize patterns, it is task-based and trained on large datasets. In contrast, generative AI can create original content inspired by existing data. While algorithms reliant on machine learning may be evaluated on accuracy, performance, and potential for bias, algorithms using generative AI raise unique challenges and concerns around the authenticity and ethics of the content they generate. Sometimes generative AI will even “hallucinate” and make up content; according to a company tracking AI hallucinations, “chatbots invent information at least 3 percent of the time — and as high as 27 percent.”57 Consequently, generative AI poses a multifaceted regulatory challenge.

AI differs from the computer-aided diagnosis (CAD) systems of the late 20th century. While radiologists found CAD’s benefits questionable, the current generation of AI offers greater processing power and adaptability, and therefore significantly more capability.58 AI algorithms can analyze diverse datasets encompassing various clinical domains, allowing AI to learn from complex patterns in data and yielding more robust and versatile diagnostic and predictive capabilities.

Ethical and Regulatory Considerations

Question # 12: With the increasing reliance on AI in health care decision-making, what ethical and regulatory considerations need to be addressed to ensure patient safety, privacy, and equity? 

Potential models, such as the A.C.C.E.S.S. AI Model, have been proposed to address ethical concerns in AI use, prioritizing patient safety, privacy, and equity. Models like A.C.C.E.S.S. emphasize engaging historically marginalized communities from the outset, including them in the design and development of models rather than as an afterthought. Given the sensitivity of health data and the stakes of clinical decisions, health care especially requires this kind of approach. Other frameworks include guidance from the American Medical Association and the National Academy of Medicine’s AI Code of Conduct.59,60

In our work on AI policy, BPC has embraced a multistakeholder approach to crafting bipartisan recommendations on AI and ethics, as well as AI impact assessments, among other topics. In our explainer, Defining High-Risk, High-Reward AI, we underscored the importance of working with stakeholders across the political spectrum as the United States develops and strengthens AI governance frameworks. We stated that through “a mix of legal requirements and soft law guidance, these frameworks should calibrate restrictions and requirements to different AI use cases’ potential risks and rewards.” We emphasized that restrictions and requirements “should not be so stringent that they prohibit high-risk, high-reward AI use cases or stifle research and development initiatives that could produce novel high-reward use cases.” Effectively tailoring restrictions and requirements based on risk levels and adopting a use-case-specific approach “would help ensure that governance regimes promote safe, effective AI adoption in ways that protect civil and human rights, national and economic security, and broader societal well-being.”

Question # 13: How can the use of AI in health care provide benefits while safeguarding patient privacy in clinical settings? 

The Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence mandates measures to address privacy concerns in health care AI. The Department of Health and Human Services (HHS) is responsible for developing an AI assurance policy to evaluate AI-enabled health care tools and ensure developer compliance with federal privacy laws. Additionally, several organizations engaged in health care AI have voluntarily pledged to establish fair and equitable AI frameworks accompanied by ethical oversight.61

It is vital to adopt a comprehensive approach involving government agencies such as the FDA, CMS, Federal Trade Commission, and Office of the National Coordinator for Health Information Technology, along with industry stakeholders, to determine safe and appropriate applications of AI algorithms in health care.

Other Considerations

Question # 15: What emerging trends do you foresee in the intersection of AI and health care?

The current prominence of AI in health care signals not just a trend but a glimpse into the future. Its long-term implications could revolutionize health care, paving the way for increasingly personalized medicine and health assistance.62 Moreover, AI holds promise for enhancing drug manufacturing through predictive models that theorize and optimize molecule design, potentially accelerating drug development timelines and improving overall efficacy and safety profiles. Already, AI is contributing to advancements in prosthetics, enabling wearers to experience near-lifelike sensations and control.63 Additionally, AI-driven augmented reality simulations are emerging to train physicians, offering realistic and immersive learning experiences. However, the rapid adoption of AI without adequate guardrails or shared ethical norms and regulations poses a significant concern. As industries navigate this uncharted territory, it is imperative to approach AI deployment with a mindful effort to establish robust ethical and regulatory frameworks.
