Informed Consent: A Monthly Review
_________________

March 2024 :: Issue 63

This digest aggregates and distills key content addressing informed consent from a broad spectrum of peer-reviewed journals and grey literature, and from various practice domains and organization types including international agencies, INGOs, governments, academic and research institutions, consortiums and collaborations, foundations, and commercial organizations. We acknowledge that this scope yields an indicative and not an exhaustive digest product.

Informed Consent: A Monthly Review is a service of the Center for Informed Consent Integrity, a program of the GE2P2 Global Foundation. The Foundation is solely responsible for its content. Comments and suggestions should be directed to:

Editor
Paige Fitzsimmons, MA
Associate Director, Center for Informed Consent Integrity
GE2P2 Global Foundation
paige.fitzsimmons@ge2p2global.org

PDF Version: Center for Informed Consent Integrity – A Monthly Review_March 2024

Spotlight Articles

Recognizing the emergence of large language models (LLMs) and generative AI, our spotlight section this month focuses on articles that appeared in the Journal of Medical Ethics, published online on 23 January 2024. The articles address the use of LLMs and generative AI in informed consent procedures and, more broadly, the use of this technology within the medical ethics space.

In the editorial by Zohny et al., Generative AI and medical ethics: the state of play, the authors provide an overview of how LLMs are currently being used in medical ethics, and note that the technology lacks the maturity for nuanced ethical decision making at this time.

Allen et al. address the potential for LLMs to be used to facilitate surgical consent transactions with patients in Consent-GPT: is it ethical to delegate procedural consent to conversational AI? The authors raise several concerns with this practice, including the risk of misinformation, the absence of trust that one might have in the doctor-patient relationship, the potential for ‘click-through’ consent rather than fulsome consent, and the lack of clarity surrounding who has responsibility for an LLM informed consent transaction.

In Assessing the performance of ChatGPT in bioethics: a large language model’s moral compass in medicine, Chen et al. find that LLMs have the potential to address certain aspects of medical ethics that require social intelligence, but that they struggle in nuanced areas such as informed consent transactions.

Finally, Balas et al. found in Exploring the potential utility of AI large language models for medical ethics: an expert panel evaluation of GPT-4 that when faced with ethical decision making, LLMs were able to articulate the principled issues at hand, but with little understanding or depth as to what each issue might mean for patient experience. The LLMs tested were also unable to integrate ethical and legal concepts in a satisfactory manner. The authors believe that, at the moment, AI may be used to complement, but not replace, healthcare practitioner involvement in informed consent transactions.

Generative AI and medical ethics: the state of play
Editorial
Hazem Zohny, Sebastian Porsdam Mann, Brian D Earp, John McMillan
Journal of Medical Ethics, 23 January 2024
Excerpt
Since their public launch, a little over a year ago, large language models (LLMs) have inspired a flurry of analysis about what their implications might be for medical ethics, and for society more broadly. Much of the recent debate has moved beyond categorical evaluations of the permissibility or impermissibility of LLM use in different general contexts (eg, at work or school), to more fine-grained discussions of the criteria that should govern their appropriate use in specific domains or towards certain ends. With each passing week, it seems more and more inevitable that LLMs will be a pervasive feature of many, if not most, of our lives. It would not be possible—and would not be desirable—to prohibit them across the board. We need to learn how to live with LLMs; to identify and mitigate the risks they pose to us, to our fellow creatures, and the environment; and to harness and guide their powers to better ends. This will require thoughtful regulation, sustained cooperation across nations, cultures and fields of inquiry; and all of this must be grounded in good ethics…

Consent-GPT: is it ethical to delegate procedural consent to conversational AI?
Current Controversy
Jemima Winifred Allen, Brian D Earp, Julian Koplin, Dominic Wilkinson
Journal of Medical Ethics, 23 January 2024
Abstract
Obtaining informed consent from patients prior to a medical or surgical procedure is a fundamental part of safe and ethical clinical practice. Currently, it is routine for a significant part of the consent process to be delegated to members of the clinical team not performing the procedure (eg, junior doctors). However, it is common for consent-taking delegates to lack sufficient time and clinical knowledge to adequately promote patient autonomy and informed decision-making. Such problems might be addressed in a number of ways. One possible solution to this clinical dilemma is through the use of conversational artificial intelligence using large language models (LLMs). There is considerable interest in the potential benefits of such models in medicine. For delegated procedural consent, LLMs could improve patients’ access to the relevant procedural information and therefore enhance informed decision-making.
In this paper, we first outline a hypothetical example of delegation of consent to LLMs prior to surgery. We then discuss existing clinical guidelines for consent delegation and some of the ways in which current practice may fail to meet the ethical purposes of informed consent. We outline and discuss the ethical implications of delegating consent to LLMs in medicine concluding that at least in certain clinical situations, the benefits of LLMs potentially far outweigh those of current practices.

Assessing the performance of ChatGPT in bioethics: a large language model’s moral compass in medicine
Original research
Jamie Chen, Angelo Cadiente, Lora J Kasselman, Bryan Pilkington
Journal of Medical Ethics, 23 January 2024
Abstract
Chat Generative Pre-Trained Transformer (ChatGPT) has been a growing point of interest in medical education yet has not been assessed in the field of bioethics. This study evaluated the accuracy of ChatGPT-3.5 (April 2023 version) in answering text-based, multiple choice bioethics questions at the level of US third-year and fourth-year medical students. A total of 114 bioethical questions were identified from the widely utilised question banks UWorld and AMBOSS. Accuracy, bioethical categories, difficulty levels, specialty data, error analysis and character count were analysed. We found that ChatGPT had an accuracy of 59.6%, with greater accuracy in topics surrounding death and patient–physician relationships and performed poorly on questions pertaining to informed consent. Of all the specialties, it performed best in paediatrics. Yet, certain specialties and bioethical categories were under-represented. Among the errors made, it tended towards content errors and application errors. There were no significant associations between character count and accuracy. Nevertheless, this investigation contributes to the ongoing dialogue on artificial intelligence’s (AI) role in healthcare and medical education, advocating for further research to fully understand AI systems’ capabilities and constraints in the nuanced field of medical bioethics.

Exploring the potential utility of AI large language models for medical ethics: an expert panel evaluation of GPT-4
Original Research
Michael Balas, Jordan Joseph Wadden, Philip C Hébert, Eric Mathison, Marika D Warren, Victoria Seavilleklein, Daniel Wyzynski, Alison Callahan, Sean A Crawford, Parnian Arjmand, Edsel B Ing
Journal of Medical Ethics, 23 January 2024
Abstract
Integrating large language models (LLMs) like GPT-4 into medical ethics is a novel concept, and understanding the effectiveness of these models in aiding ethicists with decision-making can have significant implications for the healthcare sector. Thus, the objective of this study was to evaluate the performance of GPT-4 in responding to complex medical ethical vignettes and to gauge its utility and limitations for aiding medical ethicists. Using a mixed-methods, cross-sectional survey approach, a panel of six ethicists assessed LLM-generated responses to eight ethical vignettes.
The main outcomes measured were relevance, reasoning, depth, technical and non-technical clarity, as well as acceptability of GPT-4’s responses. The readability of the responses was also assessed. Of the six metrics evaluating the effectiveness of GPT-4’s responses, the overall mean score was 4.1/5. GPT-4 was rated highest in providing technical (4.7/5) and non-technical clarity (4.4/5), whereas the lowest rated metrics were depth (3.8/5) and acceptability (3.8/5). There was poor-to-moderate inter-rater reliability characterised by an intraclass coefficient of 0.54 (95% CI: 0.30 to 0.71). Based on panellist feedback, GPT-4 was able to identify and articulate key ethical issues but struggled to appreciate the nuanced aspects of ethical dilemmas and misapplied certain moral principles.
This study reveals limitations in the ability of GPT-4 to appreciate the depth and nuanced acceptability of real-world ethical dilemmas, particularly those that require a thorough understanding of relational complexities and context-specific values. Ongoing evaluation of LLM capabilities within medical ethics remains paramount, and further refinement is needed before it can be used effectively in clinical settings.

Meta-Health. Using The Metaverse To Facilitate The Understanding Of The Patient’s Informed Consent And Their Perioperative Process
L Sánchez-Guillén, C Rebollo-Santamaría, N Montaña-Miranda, M Pérez-Berenguer, P Martínez-Galisteo, C Lillo-García, F López-Rodríguez-Arias, A Arroyo
British Journal of Surgery, 9 February 2024; supplement 1
Abstract
Introduction
The informed consent (IC) process can sometimes result in a lack of understanding due to the technical nature of the information provided. Coupled with the physical disorientation within the hospital, this can lead to perioperative anxiety and a greater risk of complications.
Methods
META-health is an application for mobile devices that seeks to address these deficiencies using metaverse and extended reality visualization technologies. By recreating the hospital environment in a virtual setting, it allows repetition of the preoperative consultation and visualization of various hospital areas, including the hospitalization floor, operating room, and consultation spaces. Multimedia content is included so that the information can be more easily understood. In addition, gamification has been implemented as an option to integrate entertainment into the process of understanding stoma management.
Results
The pilot application was modeled and developed, and the first focus group was held to assess the quality, management, orientation, and comprehension of content. The sample of 12 patients indicated proposals for improvement and functional errors: 66.7% required control of the explanatory videos, 75% requested information on ostomies and difficulties in handling the joystick. The application is currently focused on colorectal cancer patients and intends to expand to other conditions and functionalities to become a functional metaverse. Finally, 100% of participants confirmed the usefulness of the idea and the importance of receiving information and support throughout the process.
Conclusion
The use of extended reality and the metaverse improves the understanding of informed consent (IC) by patients and relatives, as well as decreases perioperative anxiety.

AI-Enhanced Healthcare: Not a new Paradigm for Informed Consent
M. Pruski
Journal of Bioethical Inquiry, 1 February 2024
Abstract
With the increasing prevalence of artificial intelligence (AI) and other digital technologies in healthcare, the ethical debate surrounding their adoption is becoming more prominent. Here I consider the issue of gaining informed patient consent to AI-enhanced care from the vantage point of the United Kingdom’s National Health Service setting. I build my discussion around two claims from the World Health Organization: that healthcare services should not be denied to individuals who refuse AI-enhanced care and that there is no precedent for seeking patient consent to AI-enhanced care. I discuss U.K. law relating to patient consent and the General Data Protection Regulation to show that current standards relating to patient consent are adequate for AI-enhanced care. I then suggest that in the future it may not be possible to guarantee patient access to non-AI-enhanced healthcare, in a similar way to how we do not offer patients manual alternatives to automated healthcare processes. Throughout my discussion I focus on the issues of patient choice and veracity in the patient–clinician relationship. Finally, I suggest that the best way to protect patients from potential harms associated with the introduction of AI to patient care is not via an overly burdensome patient consent process but via evaluation and regulation of AI technologies.

Autonomy and Informed Consent: Ensuring patients and their families are well-informed about AI-assisted decisions
Elisha Blessing, Kaledio Potter, Hubert Klaus
ResearchGate, 1 February 2024
Abstract
The integration of Artificial Intelligence (AI) into healthcare presents significant opportunities for improving patient outcomes, personalizing treatments, and enhancing diagnostic accuracy. However, this technological advancement also raises crucial ethical considerations, particularly concerning patient autonomy and informed consent. As AI-assisted decision-making becomes more prevalent in healthcare settings, ensuring that patients and their families are well-informed about the nature, benefits, and risks of AI interventions is paramount. This paper explores the challenges and strategies associated with maintaining autonomy and securing informed consent in the era of AI-assisted healthcare. We delve into the significance of patient autonomy as a fundamental principle of medical ethics and the complexities introduced by AI technologies that may challenge this autonomy.

The paper highlights the crucial role of healthcare providers in supporting informed decision-making by patients, emphasizing the need for clear communication about AI’s capabilities and limitations. Further, we address the traditional principles of informed consent and how they are complicated by the integration of AI in healthcare. These complexities include the difficulty of explaining AI’s decision-making processes and ensuring patients understand the implications of AI-assisted treatments. We propose strategies for enhancing patient and family understanding of AI-assisted decisions, including educational programs, training for healthcare professionals, and the development of patient-centered AI explanations.

The paper also reviews case studies and examples of successful implementations of AI in healthcare, providing insights into best practices and the impact on patient satisfaction and trust. Additionally, we examine the ethical and legal frameworks governing AI in healthcare, identifying the need for updated policies to address AI-specific issues and international perspectives on autonomy and informed consent. Challenges such as biases in AI algorithms, privacy and security of patient data, and overcoming skepticism and fear of technology are discussed, alongside future directions anticipated in AI development that may impact patient care and consent processes.

The paper concludes with a call to action for ongoing dialogue, research, and policy development to ensure that the ethical principles of autonomy and informed consent are upheld in the rapidly evolving landscape of AI in healthcare. This exploration underscores the importance of ensuring that patients and their families are adequately informed, empowering them to make decisions that align with their values and preferences in the context of AI-assisted healthcare.

Telehealth and AI: An Ethical Examination of Remote Healthcare Services and the Implications for Patient Care and Privacy
Andi Saputra, Siti Aminah
Quarterly Journal of Computational Technologies for Healthcare, 6 January 2024
Abstract
Background
The integration of artificial intelligence (AI) in telehealth has revolutionized healthcare delivery, offering unprecedented opportunities for remote diagnosis, treatment, and patient monitoring. This research aims to critically examine the ethical implications of this technological convergence.
Objective
To explore the ethical dimensions of AI-enhanced telehealth, focusing on accessibility, quality of care, patient privacy, data security, informed consent, regulatory challenges, and the long-term societal impacts.
Methods
The study employs a comprehensive literature review and ethical analysis framework, examining current practices, patient outcomes, and regulatory policies related to AI in telehealth.
Results
The findings highlight the potential of AI-enhanced telehealth in increasing healthcare accessibility, especially in remote and underserved areas. However, challenges such as digital divide, data privacy concerns, and the risk of algorithmic bias are identified as key ethical issues. The lack of comprehensive regulatory frameworks and standards for AI in healthcare poses significant challenges in ensuring equitable and safe care. Furthermore, the study underscores the importance of informed consent in the context of AI-driven healthcare services.
Conclusion
While AI-enhanced telehealth offers significant benefits in healthcare delivery, it raises critical ethical concerns that must be addressed. Ensuring equitable access, safeguarding patient privacy, maintaining the quality of care, and developing robust regulatory frameworks are essential for the responsible integration of AI in telehealth services. Future research should focus on developing ethical guidelines and policies that keep pace with technological advancements in healthcare.

Reshaping consent so we might improve participant choice (III) – How is the research participant’s understanding currently checked and how might we improve this process?
Research Article
Hugh Davies, Simon E Kolstoe, Anthony Lockett
Research Ethics, 24 February 2024
Open Access
Abstract
Valid consent requires that the potential research participant understands the information provided. We examined current practice in 50 proposed Clinical Trials of Investigational Medicinal Products to determine how this understanding is checked. The majority of the proposals (n = 44) indicated confirmation of understanding would take place during an interactive conversation between the researcher and potential participant, containing questions to assess and establish understanding. Yet up until now, research design and review have not focused upon this, concentrating more on written material. We propose ways this interactive conversation can be documented, and the process of checking understanding improved.

Practical approaches for supporting informed consent in neonatal clinical trials
Mini Review
Susan H. Wootton, Matthew Rysavy, Peter Davis, Marta Thio, Mar Romero-Lopez, Lindsay F. Holzapfel, Tamara Thrasher, Jaleesa D. Wade, Louise Owen
Acta Paediatrica, 22 February 2024
Abstract
The survival and health of preterm and critically ill infants have markedly improved over the past 50 years, supported by well-conducted neonatal research. However, newborn research is difficult to undertake for many reasons, and obtaining informed consent for research in this population presents several unique ethical and logistical challenges. In this article, we explore methods to facilitate the consent process, including the role of checklists to support meaningful informed consent for neonatal clinical trials.
Conclusion
The authors provide practical guidance on the design and implementation of an effective consent checklist tailored for use in neonatal clinical research.

Editor’s note: Acta Paediatrica is a peer-reviewed monthly journal published on behalf of the Foundation Acta Paediatrica based at the Karolinska Institute in Sweden.

The Donation of Human Biological Material for Brain Organoid Research: The Problems of Consciousness and Consent
Masanori Kataoka, Christopher Gyngell, Julian Savulescu, Tsutomu Sawa
Science and Engineering Ethics, 5 February 2024
Abstract
Human brain organoids are three-dimensional masses of tissues derived from human stem cells that partially recapitulate the characteristics of the human brain. They have promising applications in many fields, from basic research to applied medicine. However, ethical concerns have been raised regarding the use of human brain organoids. These concerns primarily relate to the possibility that brain organoids may become conscious in the future. This possibility is associated with uncertainties about whether and in what sense brain organoids could have consciousness and what the moral significance of that would be. These uncertainties raise further concerns regarding consent from stem cell donors who may not be sufficiently informed to provide valid consent to the use of their donated cells in human brain organoid research. Furthermore, the possibility of harm to the brain organoids raises questions about the scope of the donor’s autonomy in consenting to research involving these entities. Donor consent does not establish the reasonableness of the risk and harms to the organoids, which ethical oversight must ensure by establishing some measures to mitigate them. To address these concerns, we provide three proposals for the consent procedure for human brain organoid research. First, it is vital to obtain project-specific consent rather than broad consent. Second, donors should be assured that appropriate measures will be taken to protect human brain organoids during research. Lastly, these assurances should be fulfilled through the implementation of precautionary measures. These proposals aim to enhance the ethical framework surrounding human brain organoid research.

Blockchain-Based Dynamic Consent and its Applications for Patient-Centric Research and Health Information Sharing: Protocol for an Integrative Review
Wendy M Charles, Mark B van der Waal, Joost Flach, Arno Bisschop, Raymond X van der Waal, Hadil Es-Sbai, Christopher J McLeod
JMIR Research Protocols, 5 February 2024
Abstract
Background
Blockchain has been proposed as a critical technology to facilitate more patient-centric research and health information sharing. For instance, it can be applied to coordinate and document dynamic informed consent, a procedure that allows individuals to continuously review and renew their consent to the collection, use, or sharing of their private health information. Such has been suggested to facilitate ethical, compliant longitudinal research, and patient engagement. However, blockchain-based dynamic consent is a relatively new concept, and it is not yet clear how well the suggested implementations will work in practice. Efforts to critically evaluate implementations in health research contexts are limited.
Objective
The objective of this protocol is to guide the identification and critical appraisal of implementations of blockchain-based dynamic consent in health research contexts, thereby facilitating the development of best practices for future research, innovation, and implementation.
Methods
The protocol describes methods for an integrative review to allow evaluation of a broad range of quantitative and qualitative research designs. The PRISMA-P (Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols) framework guided the review’s structure and nature of reporting findings. We developed search strategies and syntax with the help of an academic librarian. Multiple databases were selected to identify pertinent academic literature (CINAHL, Embase, Ovid MEDLINE, PubMed, Scopus, and Web of Science) and gray literature (Electronic Theses Online Service, ProQuest Dissertations and Theses, Open Access Theses and Dissertations, and Google Scholar) for a comprehensive picture of the field’s progress. Eligibility criteria were defined based on PROSPERO (International Prospective Register of Systematic Reviews) requirements and a criteria framework for technology readiness. A total of 2 reviewers will independently review and extract data, while a third reviewer will adjudicate discrepancies. Quality appraisal of articles and discussed implementations will proceed based on the validated Mixed Method Appraisal Tool, and themes will be identified through thematic data synthesis.
Results
Literature searches were conducted, and after duplicates were removed, 492 articles were eligible for screening. Title and abstract screening allowed the removal of 312 articles, leaving 180 eligible articles for full-text review against inclusion criteria and confirming a sufficient body of literature for project feasibility. Results will synthesize the quality of evidence on blockchain-based dynamic consent for patient-centric research and health information sharing, covering effectiveness, efficiency, satisfaction, regulatory compliance, and methods of managing identity.
Conclusions
The review will provide a comprehensive picture of the progress of emerging blockchain-based dynamic consent technologies and the rigor with which implementations are approached. Resulting insights are expected to inform best practices for future research, innovation, and implementation to benefit patient-centric research and health information sharing.