Generative artificial intelligence and machine learning tools could have a substantial impact on the healthcare industry, to the benefit of providers, patients, and insurers
With the prevalence of generative artificial intelligence (AI) and machine learning tools, healthcare providers, patients, and health insurers may all benefit from the efficiencies and improved treatment outcomes these tools can provide; however, there are also some risks that individuals and entities should consider when implementing these innovative tools in healthcare.
Providing healthcare services
The use of AI in healthcare continues to gain momentum, with studies confirming its effectiveness in diagnosing some chronic illnesses, increasing staff efficiency, and improving the quality of care while optimizing resources. In fact, AI is already being used in healthcare to help diagnose patients, to aid drug “discovery and development,” to improve physician-patient communication, and to transcribe medical documents.
Because large data sets, including images, are typically available and can be applied to well-defined problems, AI has been particularly successful at diagnosing conditions that require visual comparison. For example, Google developed and trained AI to diagnose and grade diabetic retinopathy. It diagnosed patients quickly, served as a second opinion for ophthalmologists, detected the condition earlier, and reduced barriers to access. Now, researchers at Stanford have developed an algorithm that can review X-rays to detect 14 pathologies in “just a few seconds.”
The use of AI assistants and chatbots also can improve patient experience by helping patients find available physicians, schedule appointments, and even answer some patient questions.
Access to these tools can also assist physicians in identifying treatment protocols, clinical tools, and appropriate drugs more efficiently. Providers are also taking advantage of AI to document patient encounters in near real-time. Not only does this improve the documentation, but it can increase efficiency and reduce provider frustration with the time-consuming documentation tasks. Not surprisingly, some hospitals and providers also are using AI tools to verify health insurance coverage and prior authorization of procedures, which can reduce unpaid claims.
Although AI has demonstrated that it can be as accurate as human providers in diagnosing conditions and recommending treatment protocols, 60% of Americans said they would be uncomfortable if their healthcare provider relied on AI to diagnose conditions or recommend treatments, according to a Pew Research Center poll. Concerns that AI would make the patient-provider relationship worse were a factor for 57% of respondents, according to the poll, while only 38% said they thought AI would “lead to better health outcomes.”
Racial and gender bias
Beyond concerns about the effectiveness of AI, there are also concerns about the potential for bias in the underlying algorithms. Some studies have found race-based discrepancies in the algorithms and limitations due to the lack of healthcare data for women and minority populations.
In a May 2022 report on the impact of race and ethnicity in healthcare, Deloitte identified the need to reevaluate long-standing clinical algorithms to help ensure that all patients receive the care they need. Deloitte recommended forming teams to evaluate clinical algorithms, how race is used in the algorithm, and whether “race is justified.”
The Deloitte report also identified “long-standing issues around the collection and use of race and ethnicity data in health care — due to both lack of standards and misconceptions.” The report noted Centers for Disease Control and Prevention findings that race and ethnicity data were not available “for nearly 40% of people testing positive for COVID-19 or receiving a vaccine.”
The American Medical Association (AMA) has identified key points for the development and use of AI in healthcare that emphasize the use of population-representative data, steps to address explicit and implicit bias, and transparency in the use of AI for healthcare. The AMA also encourages the use of augmented AI rather than fully autonomous AI tools.
Regulators have also taken notice of the potential for bias in healthcare AI. California Attorney General Rob Bonta sent letters to 30 hospital CEOs across the state last year “requesting information about how healthcare facilities and other providers are identifying and addressing racial and ethnic disparities in commercial decision-making tools.” The letters are the first step in an investigation into whether commercial healthcare algorithms have discriminatory impacts based on race and ethnicity.
In contrast to these findings, the Pew Research Center poll found that, among the majority of Americans who see a problem with racial and ethnic bias in healthcare, a slight majority (51%) thought the problem of “bias and unfair treatment” would improve with the use of AI.
Privacy of health data
The sharing of private health data to train and use AI tools is another serious concern. Training AI algorithms requires access to vast amounts of underlying data, while using the tools creates a risk of exposing that data, either because the tool memorizes and retains the information or because third-party vendors handling it may suffer data breaches.
Although many AI tools are developed in academic research centers, partnering with private-sector companies is often the only way to commercialize the research. At times, these partnerships have resulted in the poor protection of privacy and cases in which patients were not always given control over the use of their information or were not fully informed about the privacy impacts.
Studies have also found that AI tools can re-identify individuals whose data is held in health data repositories even when the data has been anonymized and scrubbed of all identifiers. In some instances, the AI can not only re-identify the individual but also make sophisticated guesses about the individual’s non-health data.
Healthcare entities and their third-party vendors are particularly vulnerable to data breaches and ransomware attacks. The healthcare industry also reported the most expensive data breaches, with an average cost of $10.93 million, according to IBM Security’s Cost of a Data Breach Report for 2023.
As with most privacy issues, states are leading the way in the effort to protect individual privacy as AI use expands in healthcare. Currently, 10 states have AI-related regulations as part of their larger consumer privacy laws; however, only a handful of states have proposed legislation specific to the privacy of data or the use of AI in healthcare.
As the use of AI expands in healthcare, all parties involved in the process must be aware of and work to avoid the known risks of bias or loss of privacy. With awareness of the risks, the benefits for patients and providers could be vast.