Ethical and legal challenges of medical AI on informed consent: China as an example
Wang Y, Ma Z
Developing World Bioethics, 19 January 2024
Abstract
The escalating integration of Artificial Intelligence (AI) in clinical settings carries profound implications for the doctrine of informed consent, presenting challenges that demand immediate attention. China, which is advancing rapidly in the deployment of medical AI, is proactively formulating legal and ethical regulations. Taking China as an example, this paper undertakes a theoretical examination rooted in the principles of medical ethics and legal norms, analyzing informed consent and medical AI through the relevant literature. The study reveals that medical AI poses fundamental challenges to the accuracy, adequacy, and objectivity of the information doctors disclose, and also affects patients’ competency and willingness to give consent. To strengthen adherence to informed consent rules in the context of medical AI, this paper advocates a shift towards a patient-centric standard of information disclosure, the restructuring of medical liability rules, expanded professional training, and public education to advance understanding.

Using ChatGPT to Facilitate Truly Informed Medical Consent
Fatima N. Mirza, Oliver Y. Tang, Ian D. Connolly, Hael A. Abdulrazeq, Rachel K. Lim, G. Dean Roye, Cedric Priebe, Cheryl Chandler, Tiffany J. Libby, Michael W. Groff, John H. Shin, Albert E. Telfeian, Curtis E. Doberstein, Wael F. Asaad, Ziya L. Gokaslan, James Zou, Rohaid Ali
New England Journal of Medicine AI, 10 January 2024
Abstract
Informed consent is integral to the practice of medicine. Most informed consent documents are written at a reading level that surpasses the reading comprehension level of the average American. Large language models, a type of artificial intelligence (AI) with the ability to summarize and revise content, present a novel opportunity to make the language used in consent forms more accessible to the average American and thus improve the quality of informed consent. In this study, we present the experience of the largest health care system in the state of Rhode Island in implementing AI to improve the readability of informed consent documents, highlighting one tangible application for emerging AI in the clinical setting.
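As an editorial illustration of the kind of workflow this abstract describes, the sketch below asks a large language model to rewrite a consent passage in plainer language and then checks the result with standard readability formulas. It is a minimal sketch only: it assumes the `openai` (v1+) and `textstat` Python packages, and the model name, prompt wording, and target grade level are illustrative assumptions, not the protocol used in the study.

```python
# Minimal sketch: ask an LLM to rewrite a consent passage in plain language,
# then compare readability before and after. Assumes the `openai` (>=1.0)
# and `textstat` packages; the model name, prompt, and target grade level
# are illustrative assumptions, not the study's actual protocol.
from openai import OpenAI
import textstat

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def simplify_consent_text(original_text: str, target_grade: int = 8) -> str:
    """Rewrite a consent passage at roughly the requested US grade level."""
    response = client.chat.completions.create(
        model="gpt-4",  # hypothetical model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "Rewrite medical consent documents in plain language at about "
                    f"a US grade {target_grade} reading level, without removing any "
                    "risks, benefits, or alternatives."
                ),
            },
            {"role": "user", "content": original_text},
        ],
    )
    return response.choices[0].message.content


def readability_report(text: str) -> dict:
    """Standard readability metrics for a before/after comparison."""
    return {
        "flesch_kincaid_grade": textstat.flesch_kincaid_grade(text),
        "flesch_reading_ease": textstat.flesch_reading_ease(text),
    }
```

Any rewritten document would still need clinician review before use, since simplification can omit or distort risk information; the readability scores only quantify how accessible the language is, not whether the content remains complete.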

From Code to Care and Navigating Ethical Challenges in AI Healthcare
Book Chapter
Sourav Madhur Dey, Pushan Kumar Dutta
Human-Centered Approaches in Industry 5.0: Human-Machine Interaction, Virtual Reality Training, and Customer Sentiment Analysis, 2024 [IGI Global]
Abstract
Artificial intelligence (AI) has become a transformative force in the healthcare industry, offering unprecedented opportunities for improved diagnostics, patient treatment, and outcomes. However, its integration into healthcare systems has also brought to light a host of ethical concerns that require careful scrutiny. This chapter delves into the intricate nexus of ethics and AI in healthcare, shedding light on the multifaceted implications and challenges that arise. AI technologies such as machine learning (ML) and data analytics have immense potential to revolutionize healthcare. They can enhance diagnostic accuracy, enable the treatment of a larger number of patients, and improve patient outcomes. However, their implementation is not without ethical quandaries. These primarily revolve around data privacy, bias mitigation, transparency, responsibility, and patient autonomy. Transparency and interpretability are essential aspects of this ethical discourse surrounding AI in healthcare.

Generating Informed Consent Documents Related to Blepharoplasty Using ChatGPT
Original Investigation
Makoto Shiraishi, Yoko Tomioka, Ami Miyakuni, Yuta Moriwaki, Rui Yang, Jun Oba, Mutsumi Okazaki
Ophthalmic Plastic and Reconstructive Surgery, 19 December 2023
Abstract
Purpose
This study aimed to demonstrate the performance of the popular artificial intelligence (AI) language model, Chat Generative Pre-trained Transformer (ChatGPT) (OpenAI, San Francisco, CA, U.S.A.), in generating informed consent (IC) documents for blepharoplasty.
Methods
Two prompts were provided to ChatGPT to generate IC documents. Four board-certified plastic surgeons and four nonmedical staff members evaluated the AI-generated IC documents and the original IC document currently used in the clinical setting. They assessed these documents in terms of accuracy, informativeness, and accessibility.
Results
Among board-certified plastic surgeons, the initial AI-generated IC document scored significantly lower than the original IC document in accuracy (p < 0.001), informativeness (p = 0.005), and accessibility (p = 0.021), while the revised AI-generated IC document scored lower than the original document in accuracy (p = 0.03) and accessibility (p = 0.021). Among nonmedical staff members, neither AI-generated IC document differed significantly from the original document in accuracy, informativeness, or accessibility.
Conclusions
Our results showed that ChatGPT in its current form cannot serve as a standalone patient education resource. However, it has the potential to produce better IC documents once its handling of professional terminology improves. This AI technology will eventually transform ophthalmic plastic surgery care by enhancing patient education and decision-making via IC documents.
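For readers interested in how such small-panel ordinal ratings are typically compared, the sketch below runs a nonparametric two-sample test on hypothetical rater scores. The score vectors, and the choice of the Mann-Whitney U test, are assumptions for illustration; they are not the study’s data or necessarily its exact statistical method.

```python
# Minimal sketch: compare ordinal document ratings from a small rater panel
# using a nonparametric two-sample test. The scores below are made-up
# placeholders, not data from the study.
from scipy.stats import mannwhitneyu

# Hypothetical 5-point ratings of one criterion (e.g., accuracy) for the
# original consent document and an AI-generated draft.
original_doc_scores = [5, 4, 5, 4]
ai_doc_scores = [3, 4, 3, 3]

result = mannwhitneyu(original_doc_scores, ai_doc_scores, alternative="two-sided")
print(f"Mann-Whitney U = {result.statistic:.1f}, p = {result.pvalue:.3f}")
```

With only four raters per group a test like this has very little statistical power, which is one reason such evaluations usually report several criteria and rater groups rather than a single comparison.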

Comparison of artificial intelligence-assisted informed consent obtained before coronary angiography with the conventional method: Medical competence and ethical assessment
Fatih Aydin, Özge Turgay Yildirim, Ayse Huseyinoglu Aydin, Bektas Murat, Cem Hakan Basaran
Digital Health, 30 November 2023
Abstract
Objective
At the time of informed consent (IC) for coronary angiography (CAG), patients’ knowledge of the process is inadequate. Time constraints and a lack of personalization of consent are the primary causes of this inadequate information. The consent process can be enhanced by obtaining IC with a chatbot powered by artificial intelligence (AI).
Methods
In the study, patients who were to undergo CAG for the first time were randomly divided into two groups: IC was obtained from one group using the conventional method and from the other using an AI-supported chatbot (ChatGPT-3). Both groups were then evaluated with two distinct questionnaires measuring their satisfaction and their ability to understand CAG risks.
Results
While satisfaction questionnaire scores did not differ between the two groups (p = 0.581), scores on the questionnaire assessing correct understanding of CAG risks were significantly higher in the AI group (p < 0.001).
Conclusions
AI can be trained to support clinicians in giving IC before CAG. In this way, the workload of healthcare professionals can be reduced while providing a better IC.
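To make the chatbot arm of such a design concrete, the sketch below shows one way a procedure-specific consent chatbot could be wired up with a general-purpose LLM API. It assumes the `openai` (v1+) Python package; the system prompt, model name, and scope restrictions are illustrative assumptions, not the configuration the authors used.

```python
# Minimal sketch: a consent-support chatbot constrained to one procedure
# (coronary angiography). The system prompt, model choice, and scoping rules
# are assumptions for illustration, not the study's actual chatbot.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You help patients understand coronary angiography before they sign an "
    "informed consent form. Explain the procedure, its common and serious "
    "risks, and the alternatives in plain language. Do not give a personal "
    "recommendation; for questions outside this procedure, advise the patient "
    "to ask their cardiologist."
)


def answer_patient_question(question: str, history: list | None = None) -> str:
    """Answer one patient question, keeping earlier turns for context."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    messages += history or []
    messages.append({"role": "user", "content": question})
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # a GPT-3.5-class model, chosen as an assumption
        messages=messages,
    )
    return response.choices[0].message.content
```

Whatever the implementation, the clinician remains responsible for confirming understanding and answering questions the chatbot cannot, consistent with the authors’ conclusion that AI supports rather than replaces the clinician in obtaining IC.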

Informed consent for artificial intelligence in emergency medicine: A practical guide
Kenneth V. Iserson
The American Journal of Emergency Medicine, 25 November 2023
Abstract
As artificial intelligence (AI) expands its presence in healthcare, particularly within emergency medicine (EM), there is growing urgency to explore the ethical and practical considerations surrounding its adoption. AI holds the potential to revolutionize how emergency physicians (EPs) make clinical decisions, but AI’s complexity often surpasses EPs’ capacity to provide patients with informed consent regarding its use. This article underscores the crucial need to address the ethical pitfalls of AI in EM. Patient autonomy necessitates that EPs engage in conversations with patients about whether to use AI in their evaluation and treatment. As clinical AI integration expands, this discussion should become an integral part of the informed consent process, aligning with ethical and legal requirements.
The rapid availability of AI programs, fueled by vast electronic health record (EHR) datasets, has led to increased pressure on hospitals and clinicians to embrace clinical AI without comprehensive system evaluation. However, the evolving landscape of AI technology outpaces our ability to anticipate its impact on medical practice and patient care. The central question arises: Are EPs equipped with the necessary knowledge to offer well-informed consent regarding clinical AI? Collaborative efforts between EPs, bioethicists, AI researchers, and healthcare administrators are essential for the development and implementation of optimal AI practices in EM.
To facilitate informed consent about AI, EPs should understand at least seven key areas: (1) how AI systems operate; (2) whether AI systems are understandable and trustworthy; (3) the limitations of AI systems and the errors they make; (4) how disagreements between the EP and the AI are resolved; (5) whether the patient’s personally identifiable information (PII) and the AI computer systems will be secure; (6) whether the AI system functions reliably (has been validated); and (7) whether the AI program exhibits bias. This article addresses each of these critical issues, aiming to empower EPs with the knowledge required to navigate the intersection of AI and informed consent in EM.