Recognizing the emergence of large language models (LLMs) and generative AI, our spotlight section this month focuses on articles that appeared in the Journal of Medical Ethics, published online on 23 January 2024. The articles address the use of LLMs and generative AI in informed consent procedures and, more broadly, the use of this technology within the medical ethics space.
In their editorial, Generative AI and medical ethics: the state of play, Zohny et al. provide an overview of how LLMs are currently being used in medical ethics and note that the technology still lacks the maturity for nuanced ethical decision-making.
Allen et al. address the potential for LLMs to facilitate surgical consent discussions with patients in Consent-GPT: is it ethical to delegate procedural consent to conversational AI? The authors raise several concerns with this practice, including the risk of misinformation, the absence of the trust inherent in the doctor-patient relationship, the potential for ‘click-through’ rather than genuinely informed consent, and the lack of clarity about who bears responsibility for a consent process conducted by an LLM.
In Assessing the performance of ChatGPT in bioethics: a large language model’s moral compass in medicine, Chen et al. find that LLMs show potential in addressing aspects of medical ethics that require social intelligence but struggle in nuanced areas such as informed consent.
Finally, in Exploring the potential utility of AI large language models for medical ethics: an expert panel evaluation of GPT-4, Balas et al. found that when faced with ethical decision-making, LLMs could articulate the principled issues at hand but showed little depth of understanding of what each issue might mean for the patient experience. The LLMs tested were also unable to integrate ethical and legal concepts in a satisfactory manner. The authors believe that, at present, AI may be used to complement, but not replace, healthcare practitioner involvement in informed consent.
Generative AI and medical ethics: the state of play
Editorial
Hazem Zohny, Sebastian Porsdam Mann, Brian D Earp, John McMillan
Journal of Medical Ethics, 23 January 2024
Excerpt
Since their public launch, a little over a year ago, large language models (LLMs) have inspired a flurry of analysis about what their implications might be for medical ethics, and for society more broadly. Much of the recent debate has moved beyond categorical evaluations of the permissibility or impermissibility of LLM use in different general contexts (eg, at work or school), to more fine-grained discussions of the criteria that should govern their appropriate use in specific domains or towards certain ends. With each passing week, it seems more and more inevitable that LLMs will be a pervasive feature of many, if not most, of our lives. It would not be possible—and would not be desirable—to prohibit them across the board. We need to learn how to live with LLMs; to identify and mitigate the risks they pose to us, to our fellow creatures, and the environment; and to harness and guide their powers to better ends. This will require thoughtful regulation, sustained cooperation across nations, cultures and fields of inquiry; and all of this must be grounded in good ethics…
Consent-GPT: is it ethical to delegate procedural consent to conversational AI?
Current Controversy
Jemima Winifred Allen, Brian D Earp, Julian Koplin, Dominic Wilkinson
Journal of Medical Ethics, 23 January 2024
Abstract
Obtaining informed consent from patients prior to a medical or surgical procedure is a fundamental part of safe and ethical clinical practice. Currently, it is routine for a significant part of the consent process to be delegated to members of the clinical team not performing the procedure (eg, junior doctors). However, it is common for consent-taking delegates to lack sufficient time and clinical knowledge to adequately promote patient autonomy and informed decision-making. Such problems might be addressed in a number of ways. One possible solution to this clinical dilemma is through the use of conversational artificial intelligence using large language models (LLMs). There is considerable interest in the potential benefits of such models in medicine. For delegated procedural consent, LLMs could improve patients’ access to the relevant procedural information and therefore enhance informed decision-making.
In this paper, we first outline a hypothetical example of delegation of consent to LLMs prior to surgery. We then discuss existing clinical guidelines for consent delegation and some of the ways in which current practice may fail to meet the ethical purposes of informed consent. We outline and discuss the ethical implications of delegating consent to LLMs in medicine, concluding that at least in certain clinical situations, the benefits of LLMs potentially far outweigh those of current practices.
Assessing the performance of ChatGPT in bioethics: a large language model’s moral compass in medicine
Original research
Jamie Chen, Angelo Cadiente, Lora J Kasselman, Bryan Pilkington
Journal of Medical Ethics, 23 January 2024
Abstract
Chat Generative Pre-Trained Transformer (ChatGPT) has been a growing point of interest in medical education yet has not been assessed in the field of bioethics. This study evaluated the accuracy of ChatGPT-3.5 (April 2023 version) in answering text-based, multiple choice bioethics questions at the level of US third-year and fourth-year medical students. A total of 114 bioethical questions were identified from the widely utilised question banks UWorld and AMBOSS. Accuracy, bioethical categories, difficulty levels, specialty data, error analysis and character count were analysed. We found that ChatGPT had an accuracy of 59.6%, performing better on topics surrounding death and patient–physician relationships and poorly on questions pertaining to informed consent. Of all the specialties, it performed best in paediatrics. Yet, certain specialties and bioethical categories were under-represented. Among the errors made, it tended towards content errors and application errors. There were no significant associations between character count and accuracy. Nevertheless, this investigation contributes to the ongoing dialogue on artificial intelligence’s (AI) role in healthcare and medical education, advocating for further research to fully understand AI systems’ capabilities and constraints in the nuanced field of medical bioethics.
Exploring the potential utility of AI large language models for medical ethics: an expert panel evaluation of GPT-4
Original Research
Michael Balas, Jordan Joseph Wadden, Philip C Hébert, Eric Mathison, Marika D Warren, Victoria Seavilleklein, Daniel Wyzynski, Alison Callahan, Sean A Crawford, Parnian Arjmand, Edsel B Ing
Journal of Medical Ethics, 23 January 2024
Abstract
Integrating large language models (LLMs) like GPT-4 into medical ethics is a novel concept, and understanding the effectiveness of these models in aiding ethicists with decision-making can have significant implications for the healthcare sector. Thus, the objective of this study was to evaluate the performance of GPT-4 in responding to complex medical ethical vignettes and to gauge its utility and limitations for aiding medical ethicists. Using a mixed-methods, cross-sectional survey approach, a panel of six ethicists assessed LLM-generated responses to eight ethical vignettes.
The main outcomes measured were relevance, reasoning, depth, technical and non-technical clarity, as well as acceptability of GPT-4’s responses. The readability of the responses was also assessed. Of the six metrics evaluating the effectiveness of GPT-4’s responses, the overall mean score was 4.1/5. GPT-4 was rated highest in providing technical (4.7/5) and non-technical clarity (4.4/5), whereas the lowest rated metrics were depth (3.8/5) and acceptability (3.8/5). There was poor-to-moderate inter-rater reliability characterised by an intraclass coefficient of 0.54 (95% CI: 0.30 to 0.71). Based on panellist feedback, GPT-4 was able to identify and articulate key ethical issues but struggled to appreciate the nuanced aspects of ethical dilemmas and misapplied certain moral principles.
This study reveals limitations in the ability of GPT-4 to appreciate the depth and nuanced acceptability of real-world ethical dilemmas, particularly those that require a thorough understanding of relational complexities and context-specific values. Ongoing evaluation of LLM capabilities within medical ethics remains paramount, and further refinement is needed before it can be used effectively in clinical settings.