Q&A: Navigating the Legal and Ethical Landscape of AI in Healthcare
Tom O’Neil is a BRG managing director and leads the firm’s Governance, Risk & Compliance (GRC) practice. He has broad private- and public-sector experience including leadership roles in boardrooms and C-suites of companies in the health sector.
Amy Worley is a managing director and data protection officer at BRG. She is an expert in global data and data protection regulation, data governance, and data ethics, including the growing field of artificial intelligence (AI) regulation.
Tom and Amy recently discussed the impact of AI in the health sector and challenges and opportunities this technology presents. Below is an excerpt from their conversation.
With the dramatic surge in interest and investment in healthcare AI, what legal and ethical challenges do you see on the horizon in the healthcare sector?
Amy: That is a tough question because AI is being used to help improve patient care and outcomes in many ways. However, for the sake of this conversation I’ll focus on what this could mean for how we diagnose and treat patients.
I work closely with innovative medical device companies. These technologies have become increasingly sophisticated, collecting data and feeding it back to clinicians, who analyze it and make treatment decisions—it’s very much a human-led process. Legally and ethically, this “human-in-the-loop” model is how these companies most commonly operate, but the technical ability already exists in some cases for algorithms to make a diagnosis. It isn’t difficult to imagine algorithms playing a pivotal role in diagnosing and prescribing treatment for patients.
However, it does pose legal and ethical questions. How will the industry adjust, and what will patients come to expect as these technologies advance and improve?
Tom: This seems like a quantum leap in care delivery, but as someone who wears a fitness tracker and checks my data throughout the day, I can see how having access to this data can help inform doctors and patients alike. As we increasingly rely on these technologies, there needs to be a way to prioritize patient safety.
How can healthcare organizations balance the promise of AI to improve patient care and efficiency with the risks associated with its use?
Amy: Again, there are many possibilities, but if we think about electronic health records (EHRs) we can see significant improvements in care and efficiencies achieved by digitizing patient data. As we harness AI in EHRs, I can see what I’ll refer to as “micro adjustments” being made where we allow these technologies to make more recommendations to clinicians based on biometrics and patient and family history. This will help enable clinicians to evaluate their patients more quickly and thoroughly and make more informed decisions for diagnosis and treatment.
Tom: I can see how AI will play a key role here, but to ensure sound administration of care—and more importantly patient safety—organizations will need to proceed prudently and with a mission-driven focus.
Amy: I completely agree with you, Tom. I advise my clients that they can do precisely this by showing their work. Processes must be well documented, and there must be transparency and accountability to ensure that patient safety remains paramount. The goal isn’t just to change care but to improve it.
How can healthcare organizations identify and mitigate algorithmic biases?
Tom: Healthcare organizations have been going through a lot over the past few years, and providers find themselves facing fierce fiscal headwinds. Understandably, these organizations are looking for automation to help improve efficiency and quality of patient care. However, AI must be deployed thoughtfully and monitored to identify and counter bias.
Amy: Tom, you’re correct. This is something that executives need to take seriously. Part of the problem is that many large language models (LLMs) “hallucinate”—they can generate incorrect responses because they lack the ability to accurately assess the reliability and currency of their underlying data. These risks can be managed through rigorous human oversight, careful training of these systems on the right information, and safeguards such as easily retrievable citations to underlying sources.
What key considerations should compliance leaders, executives, and board members address as AI becomes increasingly integrated into healthcare?
Tom: Artificial intelligence has been around longer than most people realize but has only recently become mainstream vernacular. It reminds me of when the internet took off. The potential is mind-boggling, but there is also a spectrum of associated risks.
Amy: I like your reference to the invention of the internet. Satya Nadella said, “It’s the difference between life before and after the light bulb.” This technology is now very much top of mind with executives and board members, and they are looking for guidance on how to proceed.
Tom: I agree with you, Amy. Some guiding principles were included in BRG’s AI and the Future of Healthcare report, which surveyed healthcare executives, interviewed thought leaders, and drew from that research several helpful recommendations, including:
- Engage proactively with regulators. Collaborate and be part of the discussion to help shape the developing regulatory framework.
- Advance thoughtfully. Ensure AI is helping to solve meaningful problems for your organization.
- Track and adapt to new technologies. Monitoring AI advances can better position your organization in the future.
- Have a robust governance model. Establish a strong interdisciplinary governance model—don’t assume IT is solely responsible for managing this.
- Build on existing AI efforts. Continue to build on your investments and target AI solutions that solve specific challenges.