Navigating the integration of large language models in healthcare: challenges, opportunities, and implications under the EU AI Act

Introduction

In the rapidly evolving field of artificial intelligence (AI), large language models (LLMs) have emerged as powerful tools with transformative potential across healthcare sectors. These technologies promise enhanced diagnostic capabilities, patient education, and operational efficiencies. However, their integration into clinical practice is not without challenges, particularly in the context of stringent regulatory frameworks like the European Union’s AI Act [1, 2]. This editorial explores the juxtaposition of innovation and regulation, offering insights into how healthcare professionals can navigate these dynamics responsibly.

Discussion

Transformative potential of LLMs in healthcare

LLMs are increasingly recognized for their ability to process vast datasets and generate human-like text, with applications spanning medical diagnostics, administrative tasks, and patient engagement [3]. From simplifying radiology reports to drafting discharge summaries, these tools streamline workflows while improving patient comprehension [4,5,6,7]. Furthermore, they hold promise in drug discovery and personalized medicine, fostering innovation at an unprecedented scale [8].

LLMs can also democratize access to healthcare knowledge. By generating plain-language explanations of complex medical concepts, these models empower patients and support clinicians in underserved areas. In resource-limited settings, LLMs can act as an accessible adjunct to healthcare professionals, mitigating the challenges of staffing shortages and expertise gaps [9].
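
To make the plain-language use case concrete, the sketch below shows how such a rewrite might be requested programmatically. It follows the openai Python client's chat-completions interface; the model choice, prompt wording, and the simplify_report helper are illustrative assumptions rather than a validated clinical tool, and any real input would first need de-identification (see the privacy discussion below).

```python
# A minimal sketch, assuming the openai Python client (v1 interface) and an
# OPENAI_API_KEY in the environment. Model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()

def simplify_report(report_text: str) -> str:
    """Ask a general-purpose LLM for a plain-language version of a report."""
    response = client.chat.completions.create(
        model="gpt-4o",   # illustrative choice, not an endorsement
        temperature=0.2,  # keep the wording conservative
        messages=[
            {"role": "system",
             "content": ("Rewrite clinical text in plain language for a "
                         "patient. State that this is not medical advice "
                         "and that questions should go to the care team.")},
            {"role": "user", "content": report_text},
        ],
    )
    return response.choices[0].message.content
```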

However, these advancements must be balanced against inherent limitations. Their reliance on training datasets raises concerns about the quality, representativeness, and biases of the data, which can significantly impact outcomes. For instance, incorrect or oversimplified outputs may erode trust in healthcare systems or lead to adverse clinical decisions, underscoring the need for rigorous validation [10, 11].

The EU AI Act: a regulatory milestone

The European Union’s AI Act, adopted in 2024, establishes a comprehensive regulatory framework for AI systems, categorizing them by risk level: unacceptable, high, limited, and minimal. For healthcare, this risk-based approach translates into heightened scrutiny of AI applications in critical areas like diagnostics and treatment planning [2].

Key provisions and their implications

Risk categorization and transparency

The Act mandates transparency for limited-risk systems and stringent requirements for high-risk applications. For healthcare providers, this ensures that AI tools are used with informed oversight, fostering trust among clinicians and patients.

Prohibition of unacceptable-risk systems

AI systems deemed unacceptable, such as those involving biometric categorization or workplace emotion recognition, are explicitly banned. This safeguards fundamental rights and aligns with ethical principles integral to medical practice.

Governance and accountability

The establishment of regulatory sandboxes facilitates innovation while ensuring compliance. These controlled environments allow for the testing of AI tools in real-world scenarios, providing valuable insights without compromising ethical standards.

Ethical and legal challenges

While the AI Act offers a structured approach, its implementation poses significant challenges. Key issues include the following:

  i. Data privacy: LLMs process large datasets that may inadvertently include sensitive patient information, raising questions about data security and consent. Compliance with the General Data Protection Regulation (GDPR) and related legislation is critical (a minimal redaction sketch follows this list).

  ii. Bias and equity: Models trained on skewed datasets risk perpetuating healthcare disparities. Proactive measures are required to identify and mitigate such biases, particularly when deploying LLMs in multicultural settings.

  iii. Intellectual property: The ownership of outputs generated by LLMs remains a contentious issue, particularly in collaborative medical research, where authorship and credit must be carefully managed.
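
The data privacy point lends itself to a concrete safeguard: stripping obvious direct identifiers before any text leaves the institution. The Python sketch below is a deliberately minimal illustration; the regex rules and placeholder tags are our assumptions, and pattern matching alone falls well short of GDPR-grade de-identification, which requires dedicated tooling and governance review.

```python
# A minimal, illustrative redaction pass applied before text is sent to an
# external LLM. These rules catch only obvious direct identifiers.
import re

REDACTION_RULES = [
    (re.compile(r"\b\d{2}/\d{2}/\d{4}\b"), "[DATE]"),            # dd/mm/yyyy
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),     # email
    (re.compile(r"(?<!\w)\+?\d[\d\s-]{7,}\d\b"), "[PHONE]"),     # phone
    (re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE), "[MRN]"),   # record no.
]

def redact(text: str) -> str:
    """Replace obvious direct identifiers with placeholder tags."""
    for pattern, placeholder in REDACTION_RULES:
        text = pattern.sub(placeholder, text)
    return text

note = "Pt. contact: mario.rossi@example.com, +39 333 1234567, MRN: 884213"
print(redact(note))  # -> "Pt. contact: [EMAIL], [PHONE], [MRN]"
```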

OpenAI’s privacy policies and healthcare considerations

The EU AI Act necessitates a re-evaluation of data handling practices by entities like OpenAI. While its privacy policies address general principles, greater specificity is required for high-risk sectors such as healthcare [12]. Areas requiring attention include the following:

  i. Handling sensitive data: Clear guidelines on managing healthcare data are essential to ensure compliance with the GDPR and other local regulations.

  ii. Transparency and user awareness: OpenAI must enhance disclosures regarding AI-generated content, particularly in clinical contexts where decisions may significantly impact patient outcomes.

  iii. Mitigation of risk: OpenAI should consider developing healthcare-specific safeguards, including limitations on the use of LLMs in critical-care decisions without clinician oversight (a minimal oversight gate is sketched after this list).
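
The clinician-oversight safeguard in point iii can be pictured as a simple gate in software: nothing generated by the model reaches the patient record without sign-off. The sketch below is hypothetical; the LLMSuggestion type, the high_risk flag, and the release flow are assumed names for illustration, not any vendor's actual API.

```python
# A minimal human-in-the-loop gate, sketching the safeguard in point iii.
# All type and field names here are hypothetical. A real system would also
# log every decision for audit, as high-risk use under the AI Act implies.
from dataclasses import dataclass

@dataclass
class LLMSuggestion:
    patient_id: str
    text: str
    high_risk: bool  # e.g., touches critical-care decisions

def release_to_record(suggestion: LLMSuggestion,
                      clinician_approved: bool) -> str:
    """Only clinician-approved output reaches the record; high-risk
    suggestions can never bypass review."""
    if suggestion.high_risk and not clinician_approved:
        raise PermissionError(
            "Critical-care output requires clinician sign-off")
    status = "approved" if clinician_approved else "draft"
    return f"[{status}] {suggestion.text}"
```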

Fostering trust and collaboration

For LLMs to be effectively integrated into healthcare, fostering trust among all stakeholders is paramount. Patients, clinicians, and policymakers must be confident that AI tools are both safe and beneficial. This necessitates the following:

  i. Patient-centric AI: Models must prioritize patient welfare, including clear communication of their limitations and a robust mechanism for addressing errors.

  ii. Interdisciplinary collaboration: A collaborative approach involving engineers, ethicists, clinicians, and legal experts will ensure that LLMs are developed and deployed with comprehensive oversight.

  iii. Global standards: The fragmented nature of AI regulation highlights the need for international harmonization. As the EU leads with the AI Act, other nations must align their frameworks to ensure consistency and interoperability.

Balancing innovation and responsibility

The integration of LLMs in healthcare exemplifies the tension between technological progress and ethical responsibility. Achieving a sustainable balance requires the following:

  i. Education and training: Healthcare professionals must be equipped with the skills to evaluate AI tools critically. Curricula should incorporate AI literacy, focusing on its applications, limitations, and ethical implications [13].

  ii. Continuous evaluation: AI tools should undergo ongoing assessment to ensure they meet evolving regulatory standards and clinical needs (a minimal evaluation loop is sketched after this list).

  iii. Encouraging innovation: Regulatory sandboxes and similar initiatives allow innovation to flourish while maintaining ethical oversight, creating fertile ground for AI-driven advancements.
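
For the continuous-evaluation point, one lightweight pattern is to re-score a deployed model regularly against a small, clinician-curated reference set and alert when performance drifts. The sketch below assumes a model_fn callable and uses a crude keyword metric purely for illustration; real clinical evaluation would need validated metrics and expert review.

```python
# A minimal continuous-evaluation loop, sketching point ii above. The
# reference cases, scoring rule, and threshold are illustrative choices.

REFERENCE_CASES = [
    # (prompt, terms a clinician-approved answer is expected to mention)
    ("Explain what an elevated troponin means.", {"heart", "muscle"}),
    ("Explain why patients fast before surgery.", {"eat", "drink"}),
]

ALERT_THRESHOLD = 0.9  # minimum acceptable pass rate

def keyword_score(answer: str, required: set[str]) -> bool:
    """Crude proxy metric: does the answer mention all required terms?"""
    lowered = answer.lower()
    return all(term in lowered for term in required)

def evaluate(model_fn) -> float:
    """Run the reference set through the model and return the pass rate."""
    passed = sum(
        keyword_score(model_fn(prompt), required)
        for prompt, required in REFERENCE_CASES
    )
    return passed / len(REFERENCE_CASES)

def nightly_check(model_fn) -> None:
    """Flag the tool for review when the pass rate drops below threshold."""
    rate = evaluate(model_fn)
    if rate < ALERT_THRESHOLD:
        print(f"ALERT: pass rate {rate:.0%} below {ALERT_THRESHOLD:.0%}")
```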

The role of healthcare leaders

Leadership in healthcare will play a pivotal role in determining the trajectory of AI integration. By advocating for responsible innovation, healthcare leaders can influence policy development, foster interdisciplinary collaboration, and guide the ethical deployment of LLMs. Their voices are crucial in shaping a future where AI enhances, rather than replaces, the human touch in medicine [14] (Table 1).

Table 1 Summary table: integration of large language models in healthcare

Conclusion

LLMs present a double-edged sword: immense potential to enhance healthcare delivery paired with challenges that demand meticulous oversight. The EU AI Act provides a robust framework, but its success hinges on collaborative efforts among stakeholders. As healthcare professionals, we must embrace these tools responsibly to ensure they augment, rather than undermine, clinical excellence.

Data availability

No datasets were generated or analysed during the current study.

Abbreviations

AI: Artificial intelligence
LLMs: Large language models
EU: European Union
M-LLMs: Multimodal large language models
GDPR: General Data Protection Regulation

References

  1. Wang D, Zhang S (2024) Large language models in medical and healthcare fields: applications, advances, and challenges. Artif Intell Rev 57:299. https://doi.org/10.1007/s10462-024-10921-0

  2. European Parliament. EU AI Act: first regulation on artificial intelligence. https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence. Accessed 17 November 2024

  3. Cascella M, Montomoli J, Bellini V, Bignami E (2023) Evaluating the feasibility of ChatGPT in healthcare: an analysis of multiple clinical and research scenarios. J Med Syst 47(1):33. https://doi.org/10.1007/s10916-023-01925-4

  4. Jeblick K, Schachtner B, Dexl J et al (2022) ChatGPT makes medicine easy to swallow: an exploratory case study on simplified radiology reports. arXiv preprint arXiv:2212.14882. https://doi.org/10.48550/arXiv.2212.14882

  5. Lyu Q, Tan J, Zapadka ME et al (2023) Translating radiology reports into plain language using ChatGPT and GPT-4 with prompt learning: results, limitations, and potential. Vis Comput Ind Biomed Art 6(1):9. https://doi.org/10.1186/s42492-023-00136-5

  6. Reddy S (2023) Evaluating large language models for use in healthcare: a framework for translational value assessment. Inform Med Unlocked 41:101304. https://doi.org/10.1016/j.imu.2023.101304

  7. Patel SB, Lam K (2023) ChatGPT: the future of discharge summaries? Lancet Digit Health 5(3):e107–e108. https://doi.org/10.1016/S2589-7500(23)00021-3

  8. Wang L, Wan Z, Ni C, Song Q, Li Y, Clayton EW, Malin BA, Yin Z (2024) A systematic review of ChatGPT and other conversational large language models in healthcare. medRxiv preprint 2024.04.26.24306390. https://doi.org/10.1101/2024.04.26.24306390

  9. Nassiri K, Akhloufi MA (2024) Recent advances in large language models for healthcare. BioMedInformatics 4(2):1097–1143. https://doi.org/10.3390/biomedinformatics4020062

  10. Jui TD, Rivas P (2024) Fairness issues, current approaches, and challenges in machine learning models. Int J Mach Learn Cybern 15:3095–3125. https://doi.org/10.1007/s13042-023-02083-2

  11. Cascella M, Bellini V, Montomoli J, Bignami E (2023) The power of evolution cannot be contained, so let it be. Minerva Anestesiol. https://doi.org/10.23736/S0375-9393.23.17484-0

  12. OpenAI. EU privacy policy. https://openai.com/policies/eu-privacy-policy. Accessed 17 November 2024

  13. Moldt JA, Festl-Wietek T, Fuhl W et al (2024) Assessing AI awareness and identifying essential competencies: insights from key stakeholders in integrating AI into medical education. JMIR Med Educ 10:e58355. https://doi.org/10.2196/58355

  14. Sriharan A, Sekercioglu N, Mitchell C et al (2024) Leadership for AI transformation in health care organization: scoping review. J Med Internet Res 26:e54556. https://doi.org/10.2196/54556


Acknowledgements

Not applicable

Funding

No funding was obtained for the present study.

Author information

Authors and Affiliations

Authors

Contributions

E.B., M.R., R.L., V.B. conception, writing and proofreading of the paper.

Corresponding author

Correspondence to Elena Bignami.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Bignami, E., Russo, M., Lanza, R. et al. Navigating the integration of large language models in healthcare: challenges, opportunities, and implications under the EU AI Act. J Anesth Analg Crit Care 4, 79 (2024). https://doi.org/10.1186/s44158-024-00215-w
