Narrative Transparency in AI-Driven Consent

Open Peer Commentaries
Jarrel De Matas, Jiefei Wang, Vibhuti Gupta
American Journal of Bioethics, 7 April 2025
Excerpt
As artificial intelligence (AI) systems become more prevalent, ethical inquiry into transparency, trust, and patient autonomy must develop at a similar pace. One area where such inquiry is required is the process of obtaining informed consent, particularly in a biobanking context, where participants are asked to share their biological data for research purposes. Although Barnes et al. (2025) propose using blockchain and AI to improve transparency and engagement in biobanking through demonstrated consent, their approach lacks a concrete framework: informed consent should not only be considered a transactional process, as Manson and O’Neill (2007) argue, but more importantly a user-centered, communicative act that requires participants to understand complex information, balance risks and benefits, and make decisions that align with their values and preferences. To complement what we identify in Barnes et al. (2025) as an overstatement of the transactional approach to informed consent, we suggest a Narrative Transparency Framework. This framework applies storytelling principles to AI-assisted consent processes and aims to improve decision-making, enhance understanding, and foster trust through personalized, ethically framed, and user-adaptive narratives. In this paper, we explore the theoretical basis of narrative transparency, which is premised on the role of narrative structure in shaping participant understanding and decision-making. We also outline the components of the Narrative Transparency Framework and discuss practical strategies for utilizing narrative-driven AI consent interactions…

Consent Is Dead, Long Live Ethical Oversight: Integrating Ethically Sourced Data into Demonstrated Consent Models

Open Peer Commentaries
Jean-Christophe Bélisle-Pipon, Vardit Ravitsky
American Journal of Bioethics, 7 April 2025
Excerpt
Barnes et al. (2025) propose a demonstrated consent model that seeks to address challenges in modern biomedicine by transforming consent from a static, one-time transaction into a dynamic process. Their model integrates blockchain technology with generative artificial intelligence (AI) to allow donors to monitor the use of their biological samples in real time and adjust their preferences as research evolves. This approach helps to respond to the limitations of traditional consent frameworks—a concern echoed by Evans and Bihorac (2024), who note that “informed consent for data use, as conceived in the 1970s, seems dead.” They argue that modern computational methods introduce privacy risks not only through direct data breaches but also via inferences drawn from aggregated data, affecting even those who have not directly consented. Barnes and colleagues’ model embeds increased transparency and user agency into consent processes. However, it also raises ethical questions: Does this approach truly empower donors, or might it overwhelm them with technical complexity? Can blockchain’s transparency and AI’s capacity to personalize consent overcome systemic inequities, or will they obscure deeper structural imbalances? These questions are essential to assessing whether demonstrated consent can adequately safeguard autonomy, privacy, and justice in biomedical research…

Informed Consent: An Essential Tool for Medical Practice and Research

Review Article
Sukhvinder Singh Oberoi, Nilima Sharma, Sweta Rastogi, Sunil Kumar, Anand Suresh
Amrita Journal of Medicine, April-June 2025
Abstract
The concept of informed consent regulates the relationship between medical practice and patients, promoting human rights and dignity. It serves as both a legal and ethical mechanism for ensuring autonomy and self-determination. This review examines the concept of informed consent as it applies to both medical research and clinical practice, addressing its types, prerequisites, limitations, and challenges. Additionally, it explores waiver of consent and the concept of minimal risk in research. The review highlights the importance of shared decision-making (SDM), the barriers to informed consent, and the role of comprehension in the consent process. The discussion emphasizes the need for improvements in informed consent procedures, particularly in enhancing patient understanding and addressing legal and ethical gaps. Future research should focus on refining consent mechanisms to improve their effectiveness in modern healthcare.

AI meets informed consent: a new era for clinical trial communication

Michael Waters
JNCI Cancer Spectrum, 18 March 2025
Abstract
Clinical trials are fundamental to evidence-based medicine, providing patients with access to novel therapeutics and advancing scientific knowledge. However, patient comprehension of trial information remains a critical challenge, as registries like ClinicalTrials.gov often present complex medical jargon that is difficult for the general public to understand. While initiatives such as plain-language summaries and multimedia interventions have attempted to improve accessibility, scalable and personalized solutions remain elusive.
This study explores the potential of Large Language Models (LLMs), specifically GPT-4, to enhance patient education regarding cancer clinical trials. By leveraging informed consent forms (ICFs) from ClinicalTrials.gov, the researchers evaluated two AI-driven approaches—direct summarization and sequential summarization—to generate patient-friendly summaries. Additionally, the study assessed the capability of LLMs to create multiple-choice question-answer pairs (MCQAs) to gauge patient understanding. Findings demonstrate that AI-generated summaries significantly improved readability, with sequential summarization yielding higher accuracy and completeness. MCQAs showed high concordance with human-annotated responses, and over 80% of surveyed participants reported enhanced understanding of the authors’ in-house BROADBAND trial.
While LLMs hold promise in transforming patient engagement through improved accessibility of clinical trial information, concerns regarding AI hallucinations, accuracy, and ethical considerations remain. Future research should focus on refining AI-driven workflows, integrating patient feedback, and ensuring regulatory oversight. Addressing these challenges could enable LLMs to play a pivotal role in bridging gaps in clinical trial communication, ultimately improving patient comprehension and participation.

Artificial intelligence in educational games and consent under general data protection regulation

Eirini Mougiakou, Spyros Papadimitriou, Konstantina Chrysafiadi, Maria Virvou
Intelligent Decision Technologies, 18 March 2025
Abstract
As Artificial Intelligence becomes increasingly integrated into educational games, complying with the General Data Protection Regulation (GDPR)—a legal framework governing data protection and privacy in the European Union—remains an important yet complex challenge, particularly when minors are involved. Users are required to provide consent multiple times, often unexpectedly, at different game levels. This process is further complicated by the varying durations for which consent remains valid. As a result, users—especially minors—may become confused about the consent they have given. Additional concerns arise when the educational game is AI-equipped. If AI is not involved, no new data are generated. However, if AI is present, new data are continuously produced, necessitating ongoing consent. For example, a user may consent to personalisation, which could lead the game to categorise them in unintended ways, such as labelling them a ‘poor student’. This paper explores GDPR challenges in AI-empowered educational games, focusing on user consent, AI-inferred data, and compliance gaps. Intelligent educational games rely on adaptive decision-making algorithms to personalise learning experiences, making them a subset of Intelligent Decision Technologies. Our research is based on a fuzzy-based educational game developed as a testbed for studying GDPR compliance in AI-driven decision-making. The findings provide insights into ethical AI governance, dynamic consent management, and the intersection of regulatory compliance with adaptive, data-driven decision systems in intelligent educational technologies. Our research shows that not all personal data exist at the outset, when original consent is granted; personal data are also generated throughout the process.

Editor’s note: we recognise that the proposals in this article are at odds with a number of regulatory structures.

Patient consent for the secondary use of health data in artificial intelligence (AI) models: A scoping review

Khadijeh Moulaei, Saeed Akhlaghpour, Farhad Fatehi
International Journal of Medical Informatics, 8 March 2025
Abstract
Background
The secondary use of health data for training Artificial Intelligence (AI) models holds immense potential for advancing medical research and healthcare delivery. However, ensuring patient consent for such utilization is paramount to uphold ethical standards and data privacy. Patient informed consent means patients are fully informed about how their data will be collected, used, and protected, and they voluntarily agree to allow their data to be used for AI models. In addition to formal consent frameworks, establishing a social license is critical to foster public trust and societal acceptance for the secondary use of health data in AI systems. This study examines patient consent practices in this domain.
Method
In this scoping review, we searched Web of Science, PubMed, and Scopus. We included studies in English that addressed the core issues of interest, namely, privacy, security, legal, and ethical issues related to the secondary use of health data in AI models. Articles not addressing the core issues, as well as systematic reviews, meta-analyses, books, letters, conference abstracts, and study protocols, were excluded. Two authors independently screened titles, abstracts, and full texts, resolving disagreements with a third author. Data were extracted using a data extraction form.
Results
After screening 774 articles, a total of 38 articles were ultimately included in the review. Across these studies, a total of 178 barriers and 193 facilitators were identified. We consolidated similar codes and extracted 65 barriers and 101 facilitators, which we then categorized into four themes: “Structure,” “People,” “Physical system,” and “Task.” We identified notable emphasis on “Legal and Ethical Challenges” and “Interoperability and Data Governance.” Key barriers included concerns over privacy and security breaches, inadequacies in informed consent processes, and unauthorized data sharing. Critical facilitators included enhancing patient consent procedures, improving data privacy through anonymization, and promoting ethical standards for data usage.
Conclusion
Our study underscores the complexity of patient consent for the secondary use of health data in AI models, highlighting significant barriers and facilitators within legal, ethical, and technological domains. We recommend the development of specific guidelines and actionable strategies for policymakers, practitioners, and researchers to improve informed consent, ensuring privacy, trust, and ethical use of data, thereby facilitating the responsible advancement of AI in healthcare.

Patient Consent and The Right to Notice and Explanation of AI Systems Used in Health Care

Meghan E Hurley, Benjamin H Lang, Kristin Marie Kostick-Quenet, Jared N Smith, Jennifer Blumenthal-Barby
The American Journal of Bioethics, March 2025
Abstract
Given the need for enforceable guardrails for artificial intelligence (AI) that protect the public and allow for innovation, the U.S. Government recently issued a Blueprint for an AI Bill of Rights which outlines five principles of safe AI design, use, and implementation. One in particular, the right to notice and explanation, requires accurately informing the public about the use of AI that impacts them in ways that are easy to understand. Yet, in the healthcare setting, it is unclear what goal the right to notice and explanation serves and what moral importance patient-level disclosure holds. We propose three normative functions of this right: (1) to notify patients about their care, (2) to educate patients and promote trust, and (3) to meet standards for informed consent. Additional clarity is needed to guide practices that respect the right to notice and explanation of AI in healthcare while providing meaningful benefits to patients.

Editor’s note: The following five articles are commentaries on this article which appeared in the American Journal of Bioethics.

Disclosure as Absolution in Medicine: Disentangling Autonomy from Beneficence and Justice in Artificial Intelligence

Guest Editorial
Kayte Spector-Bagdady, Alex John London
The American Journal of Bioethics, 24 February 2025
Introduction
The rush to deploy artificial intelligence (AI) and machine learning (ML) systems in medicine highlights the need for bioethics to deepen its normative engagement in disentangling autonomy from beneficence and justice in responsible medical practice. One of the reasons that informed consent is such a unique tool is its morally transformative nature. Actions that would otherwise be illegal or unethical are rendered permissible by the provision of free and informed consent. But consent is not a panacea to absolve all risks and burdens. The proliferation of AI/ML systems highlights that every additional call for disclosure warrants a deep introspection of goals, and of what values they reflect (Hurley et al. 2025).
For example, while informed consent might be appropriate when there is a choice whether to use an AI tool in clinical care, we cannot let deference to autonomy substitute for rigorous standards—based in beneficence and justice—that ensure the safe, effective, and equitable deployment of AI in medicine. Shortcomings in AI technologies that do not meet those standards cannot otherwise be absolved through the informed consent process. The assumption that patients are empowered to assess or alleviate such deficiencies is misguided. While much has been written about the inability of informed consent to bear its increasing transformative burden (Grady et al. 2017), further exploration of the appropriate division of moral labor between ethical values in the use of AI in clinical practice is warranted.

Beyond Disclosure: Rethinking Patient Consent and AI Accountability in Healthcare

Open Peer Commentaries
Tony Yang
The American Journal of Bioethics, 24 February 2025
Excerpt
The growing integration of artificial intelligence (AI) into healthcare raises fundamental questions about patient consent, autonomy, and trust. The concept of a “Right to Notice and Explanation” (RN&E) articulated in Hurley et al.’s work highlights an essential ethical obligation: ensuring that patients are aware of AI’s role in their care and understand its implications (Hurley et al. 2025). While this framework is compelling, it can be further strengthened by situating it within broader regulatory, ethical, and operational contexts. Insights from recent regulatory developments, systematic reviews, and calls for responsible AI highlight the need to reconceptualize RN&E as a dynamic, participatory, and institutionally embedded process (Kleinberg et al. 2018).
One of the key gaps in current discussions on RN&E is the lack of clarity regarding what constitutes sufficient “notice” and “explanation” in practice (Hurley et al. 2025). While it is clear that patients should be informed when AI influences their care, it remains less clear how that information should be conveyed and at what level of technical detail. Explanations must balance transparency with simplicity, ensuring that patients understand the role of AI without being overwhelmed by technical complexity (Siala and Wang 2022). This can be achieved by designing explanations with a human-centered approach, focusing on health literacy and contextual relevance. For instance, AI explanations should be aligned with existing health literacy practices, leveraging the principles of plain language and clear communication to ensure patient comprehension (Sørensen et al. 2012). By embedding these elements into health system workflows, RN&E can move beyond a static “disclosure” model to become a participatory process in which patients are not only informed but also empowered to ask questions, seek clarification, and participate in decisions regarding their care…

Community-Based Consent Model, Patient Rights, and AI Explainability in Medicine

Open Peer Commentaries
Aorigele Bao, Yi Zeng
The American Journal of Bioethics, 24 February 2025
Excerpt
…Considering the explainability issue in healthcare, we believe that a community-based consent model should be considered to supplement the notice and right-to-explanation model in healthcare. Here, the community-based consent model refers to promoting the protection of patients’ rights to notice and explanation in healthcare through the construction and maintenance of a community with diverse representatives. Specifically, the model can be understood as a hybrid representative community, composed of patient representatives, patient advocacy groups, disease-specific societies, or local healthcare structures, that intervenes as a supplementary reference during the early development and clinical use of AI tools for healthcare, ensuring patients’ rights to notice and explanation…