Multi-Center Pilot Study to Assess Provider Experience with an Artificial Intelligence Multimedia Informed Consent Platform

Usman Latif, Timothy Deer, Sean Li
Neuromodulation: Technology at the Neural Interface, October 2024
Introduction
Informed consent is a cornerstone of medical care, involving a thorough discussion of the procedure, its risks and benefits, and reasonable expectations for the patient. In spine surgery, 24% of malpractice suits are attributable to insufficient informed consent, and numerous studies indicate that informed consent is frequently inadequate. Multimedia aids have been shown to enhance patient comprehension and satisfaction. This study assessed provider experience at multiple centers with a platform that delivers multimedia videos and uses artificial intelligence to ensure that patients view the entire video attentively before signing a digital consent. A permanent videographic record of the patient viewing the video is created for the purpose of reducing malpractice exposure.

Editor’s Note: After assessing the observations and arguments made in this paper, we are reaching out to the corresponding authors for clarification.

Patient Autonomy in Medical Education: Navigating Ethical Challenges in the Age of Artificial Intelligence

Review article
Hui Lu, Ahmad Alhaskawi, Yanzhao Dong, Xiaodi Zou, Haiying Zhou, Sohaib Hasan Abdullah Ezzi, Vishnu Goutham Kota, Mohamed Hasan Abdulla Hasan Abdulla, Sahar Ahmed Abdalbary
INQUIRY: The Journal of Health Care Organization, Provision, and Financing, 18 September 2024
Open access
Abstract
The increasing integration of Artificial Intelligence (AI) in the medical domain signifies a transformative era in healthcare, with promises of improved diagnostics, treatment, and patient outcomes. However, this rapid technological progress brings a concomitant surge in ethical challenges permeating medical education. This paper explores the crucial role of medical educators in adapting to these changes, ensuring that ethical education remains a central and adaptable component of medical curricula. Medical educators must evolve alongside AI’s advancements, becoming stewards of ethical consciousness in an era where algorithms and data-driven decision-making play pivotal roles in patient care. The traditional paradigm of medical education, rooted in foundational ethical principles, must adapt to incorporate the complex ethical considerations introduced by AI. This pedagogical approach fosters dynamic engagement, cultivating a profound ethical awareness among students. It empowers them to critically assess the ethical implications of AI applications in healthcare, including issues related to data privacy, informed consent, algorithmic biases, and technology-mediated patient care. Moreover, the interdisciplinary nature of AI’s ethical challenges necessitates collaboration with fields such as computer science, data ethics, law, and social sciences to provide a holistic understanding of the ethical landscape.

Patient Consent and The Right to Notice and Explanation of AI Systems Used in Health Care

Target Article
Meghan E. Hurley, Benjamin H. Lang, Kristin Marie Kostick-Quenet, Jared N. Smith, Jennifer Blumenthal-Barby
The American Journal of Bioethics, 17 September 2024
Abstract
Given the need for enforceable guardrails for artificial intelligence (AI) that protect the public and allow for innovation, the U.S. Government recently issued a Blueprint for an AI Bill of Rights which outlines five principles of safe AI design, use, and implementation. One in particular, the right to notice and explanation, requires accurately informing the public about the use of AI that impacts them in ways that are easy to understand. Yet, in the healthcare setting, it is unclear what goal the right to notice and explanation serves, and the moral importance of patient-level disclosure. We propose three normative functions of this right: (1) to notify patients about their care, (2) to educate patients and promote trust, and (3) to meet standards for informed consent. Additional clarity is needed to guide practices that respect the right to notice and explanation of AI in healthcare while providing meaningful benefits to patients.

Meaningful and Informed Consent and Satisfactory Education for Physicians Using AI in Radiology

Katerina Kapotas Tapas
Journal of Health Care Compliance, 2024
Abstract
The article explores the use of artificial intelligence (AI) in radiology and the challenges it poses in terms of transparency and understanding. It emphasizes the importance of human oversight and control over AI systems, as well as the principles of prudence, human autonomy, and responsibility. The article also highlights the need for informed consent and adequate education and training for physicians using AI. While AI has the potential to improve diagnostic accuracy and patient outcomes in radiology, ethical considerations must be carefully addressed.

Safeguarding Data Privacy and Informed Consent: Ethical Imperatives in AI-Driven Mental Healthcare

Souvik Dhar, Utsa Sarkar
Intersections of Law and Computational Intelligence in Health Governance, 2024 [IGI Global]
Abstract
This chapter explores the ethical challenges surrounding data privacy and informed consent in artificial intelligence (AI)-driven mental healthcare in India. The integration of AI technologies in mental health services offers potential for enhanced patient outcomes, but also raises significant ethical issues. Emphasizing patient autonomy and the protection of personal information, the chapter examines the complexities of informed consent and data privacy. It highlights the importance of transparent communication and robust regulatory frameworks to safeguard patient rights. By analysing key principles and the Digital Personal Data Protection Act 2023, the chapter provides insights into balancing technological advancements with ethical imperatives. It advocates for comprehensive ethical guidelines and collaborative efforts among stakeholders to foster a responsible and patient-centred AI-driven mental healthcare system. The chapter emphasizes the need for education on AI’s impact, potential biases, and the significance of maintaining trust and accountability in patient care.

Artificial intelligence and the law of informed consent

Book Chapter
Glenn Cohen, Andrew Slottje
Research Handbook on Health, AI and the Law, 16 July 2024 [Elgar]
Introduction
A patient is diagnosed with stage I non-small-cell lung cancer. The patient’s physician recommends surgery and adjuvant chemotherapy, explaining the benefits and risks of each. The physician does not explain, however, that standard treatment guidelines for the patient would counsel against chemotherapy, and that more aggressive treatment has been recommended for the patient by an artificial intelligence (AI) system based on the patient’s imaging data. Only after the course of treatment is completed does the patient learn of the AI’s involvement in the care decision. The patient is distressed that, as he sees it, he underwent a potentially unnecessary treatment because his physician outsourced decision-making to a machine without letting him know.

Artificial Intelligence as a Consent Aid for Carpal Tunnel Release

Original Research
James Brock, Richard Roberts, Matthew Horner, Preetham Kodumuri
Cureus, 24 June 2024
Open Access
Abstract
Background
Hand surgeons have been charged with the use of diverse modalities to enhance the consenting process following the Montgomery ruling. Artificial Intelligence language models have been suggested as patient education tools that may aid consent.
Methods
We compared the quality and readability of the Every Informed Decision Online (EIDO) patient information leaflet for carpal tunnel release with the artificial intelligence language model Chat Generative Pretrained Transformer (GPT).
Results
The quality of information by ChatGPT was significantly higher using the DISCERN score, 71/80 for ChatGPT compared to 62/80 for EIDO (p=0.014). DISCERN interrater observer reliability was high (0.65) using the kappa statistic. Flesch-Kincaid readability scoring was 12.3 for ChatGPT and 7.5 for EIDO, suggesting a more complex reading age for the ChatGPT information.
Conclusion
The artificial intelligence language model ChatGPT produces high-quality information at the expense of readability when compared to EIDO information leaflets for carpal tunnel release consent.

Safe and Equitable Pediatric Clinical Use of AI

Viewpoint
Jessica L. Handley, Christoph U. Lehmann, Raj M. Ratwani
JAMA Pediatrics, 13 May 2024; 178(7) pp 637-638
Excerpt
Use of artificial intelligence (AI) in pediatric clinical settings has the potential to improve diagnosis, treatment, and quality of care. However, most pediatric AI products tend to be in an early stage—mainly to predict risks using patient data (eg, kidney injury, clinical deterioration, and mortality). AI may also lead to unintended patient safety and equity issues harmful to children. US President Biden’s October 2023 AI executive order calls for an AI safety framework. As guidelines, standards, and policies are formulated to guide safe and equitable AI use, the application of AI in pediatrics must be recognized as imbued with distinctly different risks and mitigation needs for children than in adults…

Co-creating Consent for Data Use — AI-Powered Ethics for Biomedical AI

Barbara J. Evans, Azra Bihorac
New England Journal of Medicine, 14 June 2024
Abstract
As nations design regulatory frameworks for medical AI, research and pilot projects are urgently needed to harness AI as a tool to enhance today’s regulatory and ethical oversight processes. Under pressure to regulate AI, policy makers may think it expedient to repurpose existing regulatory institutions to tackle the novel challenges AI presents. However, the profusion of new AI applications in biomedicine — combined with the scope, scale, complexity, and pace of innovation — threatens to overwhelm human regulators, diminishing public trust and inviting backlash. This article explores the challenge of protecting privacy while ensuring access to large, inclusive data resources to fuel safe, effective, and equitable medical AI. Informed consent for data use, as conceived in the 1970s, seems dead, and it cannot ensure strong privacy protection in today’s large-scale data environments. Informed consent has an ongoing role but must evolve to nurture privacy, equity, and trust. It is crucial to develop and test alternative solutions, including those using AI itself, to help human regulators oversee safe, ethical use of biomedical AI and give people a voice in co-creating privacy standards that might make them comfortable contributing their data. Biomedical AI demands AI-powered oversight processes that let ethicists and regulators hear directly and at scale from the public they are trying to protect. Nations are not yet investing in AI tools to enhance human oversight of AI. Without such investments, there is a rush toward a future in which AI assists everyone except regulators and bioethicists, leaving them behind.

Spotlight Section

This month we would like to spotlight two articles focused on applications of artificial intelligence technologies in healthcare. In an article in Machine Learning and Knowledge Extraction – Evaluation of AI ChatBots for the Creation of Patient-Informed Consent Sheets – Raimann et al. assessed the ability of large language models (LLMs) to generate information sheets for six basic anesthesiologic procedures. The authors found that the three LLMs tested fulfilled less than 50% of the predetermined requirements for a satisfactory and compliant information sheet. They also found that descriptions of key elements, such as risks and documentation regarding consultation, varied. The authors conclude that LLMs have “clear limitations” in generating patient information sheets.

Park addresses the patient perspective on AI use in healthcare provision in the Digital Health article – Patient perspectives on informed consent for medical AI: A web-based experiment. Through this work, Park adds a new voice to the debate about whether healthcare providers who use AI as a decision aid ought to disclose this to their patients. The study found that patients trust second opinions from other physicians more than an AI diagnosis, but as the risk level of a procedure increased, so did the importance patients placed on information about AI use. From the patient perspective, this study found disclosure of AI use in diagnosis to be necessary.

The Center for Informed Consent Integrity is exploring the use of ChatGPT 4.0 to analyze the informed consent landscape, specifically how the 60+ editions of this digest might function as a content base for inquiry. We have encountered some limitations for our purpose, albeit different from those found by the authors below. We will continue to explore generative AI to strengthen our work and will keep our readers updated.

Evaluation of AI ChatBots for the Creation of Patient-Informed Consent Sheets
Florian Jürgen Raimann, Vanessa Neef, Marie Charlotte Hennighausen, Kai Zacharowski, Armin Niklas Flinspach
Machine Learning and Knowledge Extraction, 24 May 2024
Abstract
Introduction
Large language models (LLMs), such as ChatGPT, are a topic of major public interest, and their potential benefits and threats are a subject of discussion. The potential contribution of these models to health care is widely discussed. However, few studies to date have examined LLMs. For example, the potential use of LLMs in (individualized) informed consent remains unclear.
Methods
We analyzed the performance of the LLMs ChatGPT 3.5, ChatGPT 4.0, and Gemini with regard to their ability to create an information sheet for six basic anesthesiologic procedures in response to corresponding questions. We performed multiple attempts to create forms for anesthesia and analyzed the results against checklists based on existing standard sheets.
Results
None of the LLMs tested were able to create a legally compliant information sheet for any basic anesthesiologic procedure. Overall, fewer than one-third of the risks, procedural descriptions, and preparations listed were covered by the LLMs.
Conclusions
There are clear limitations of current LLMs in terms of practical application. Advantages in the generation of patient-adapted risk stratification within individual informed consent forms are not available at the moment, although the potential for further development is difficult to predict. 

Patient perspectives on informed consent for medical AI: A web-based experiment
Hai Jin Park
Digital Health, 30 April 2024
Abstract
Objective
Despite the increasing use of AI applications as a clinical decision support tool in healthcare, patients are often unaware of their use in the physician’s decision-making process. This study aims to determine whether doctors should disclose the use of AI tools in diagnosis and what kind of information should be provided.
Methods
A survey experiment with 1000 respondents in South Korea was conducted to estimate the patients’ perceived importance of information regarding the use of an AI tool in diagnosis in deciding whether to receive the treatment.
Results
The study found that the use of an AI tool increases the perceived importance of information related to its use, compared with when a physician consults with a human radiologist. Information regarding the AI tool when AI is used was perceived by participants either as more important than or similar to the regularly disclosed information regarding short-term effects when AI is not used. Further analysis revealed that gender, age, and income have a statistically significant effect on the perceived importance of every piece of AI information.
Conclusions
This study supports the disclosure of AI use in diagnosis during the informed consent process. However, the disclosure should be tailored to the individual patient’s needs, as patient preferences for information regarding AI use vary across gender, age and income levels. It is recommended that ethical guidelines be developed for informed consent when using AI in diagnoses that go beyond mere legal requirements.