Beyond Disclosure: Rethinking Patient Consent and AI Accountability in Healthcare

Open Peer Commentaries
Tony Yang
The American Journal of Bioethics, 24 February 2025
Excerpt
    The growing integration of artificial intelligence (AI) into healthcare raises fundamental questions about patient consent, autonomy, and trust. The concept of a “Right to Notice and Explanation” (RN&E) articulated in Hurley et al.’s work highlights an essential ethical obligation: ensuring that patients are aware of AI’s role in their care and understand its implications (Hurley et al. 2025). While this framework is compelling, it can be further strengthened by situating it within broader regulatory, ethical, and operational contexts. Insights from recent regulatory developments, systematic reviews, and calls for responsible AI highlight the need to reconceptualize RN&E as a dynamic, participatory, and institutionally embedded process (Kleinberg et al. 2018).
One of the key gaps in current discussions on RN&E is the lack of clarity regarding what constitutes sufficient “notice” and “explanation” in practice (Hurley et al. 2025). While it is clear that patients should be informed when AI influences their care, it remains less clear how that information should be conveyed and at what level of technical detail. Explanations must balance transparency with simplicity, ensuring that patients understand the role of AI without being overwhelmed by technical complexity (Siala and Wang 2022). This can be achieved by designing explanations with a human-centered approach, focusing on health literacy and contextual relevance. For instance, AI explanations should be aligned with existing health literacy practices, leveraging the principles of plain language and clear communication to ensure patient comprehension (Sørensen et al. 2012). By embedding these elements into health system workflows, RN&E can move beyond a static “disclosure” model to become a participatory process in which patients are not only informed but also empowered to ask questions, seek clarification, and participate in decisions regarding their care…

Community-Based Consent Model, Patient Rights, and AI Explainability in Medicine

Open Peer Commentaries
Aorigele Bao, Yi Zeng
The American Journal of Bioethics, 24 February 2025
Excerpt
…Considering the explainability issue in healthcare, we believe that a community-based consent model should be considered to supplement the notification and right to explanation model. Here, the community-based consent model refers to protecting patients’ rights to notification and explanation in healthcare through the construction and maintenance of a community with diverse representatives. Specifically, it can be understood as a hybrid representative community composed of patient representatives, patient advocacy groups, disease-specific societies, or local healthcare structures that intervenes as a supplementary reference during the early development and clinical use of artificial intelligence tools for healthcare, ensuring patients’ rights to notice and explanation…

A Heuristic for Notifying Patients About AI: From Institutional Declarations to Informed Consent

Open Peer Commentaries
Matthew Elmore, Nicoleta Economou-Zavlanos, Michael Pencina
The American Journal of Bioethics, 24 February 2025
Excerpt
    The principle of respect for autonomy, often expressed as the right to understand and make decisions about one’s care, has recently gained attention in AI-related bioethics. Hurley et al. (2025) have made an important contribution by examining the White House’s Blueprint for an AI Bill of Rights, asking how the right to notice and explanation might apply in healthcare contexts. They propose three possible functions for this right in patient care: (1) to provide a simple “FYI” to patients about the use of AI; (2) to foster education and trust; and (3) to serve as part of a patient’s right to informed consent.
This commentary offers a heuristic for determining how best to plot these three aims (Table 1). Simplifying the recent work of Rose and Shapiro (2024), our heuristic lays out the functions described by Hurley et al. on a four-quadrant grid, scaling notification practices along two axes: the degree of AI autonomy and the degree of clinical risk. The need for robust consent increases when clinical risk is higher and when AI has greater autonomy in decision-making. This heuristic is adaptable to institution-specific measures of clinical risk, and it also provides flexibility for institutions to address their unique workflows…

Clarifying When Consent Might Be Illusory in Notice and Explanation Rights

Bryan Pilkington, Charles E. Binkley
The American Journal of Bioethics, 24 February 2025
Excerpt
In “Patient Consent and The Right to Notice and Explanation of AI Systems Used in Health Care,” Hurley et al. (2025) helpfully summarize key features of the AI Bill of Rights, focusing on the right to notice and explanation (RN&E) and arguing that greater clarity, including a specified goal, is needed. Though we concur with their overall assessment and appreciate the authors’ reliance on our work (Binkley and Pilkington 2023a) on consent, further clarification is needed, as some of the nuance of our position appears to have been lost in their goal categorizations. In exploring the normative function of RN&E, the authors ask what RN&E “is meant to achieve and how (and why) it is morally important at the individual patient level in healthcare?” We hold that for the three categories of use cases that the authors reference (chatbot, diagnostic and prognostic), the benefit to patients in claiming an RN&E of AI systems in their healthcare is not that AI models per se will be used. Rather, the benefit to patients is the RN&E about how the output of the AI systems (chats, diagnoses, prognoses) will be used in their care, how patients will be informed, and with whom, besides patients and the clinicians involved in their care, the information will be shared.

Consent for genomic sequencing: a conversation, not just a form

Comment
Danya F. Vears
European Journal of Human Genetics, 3 March 2025
Open Access
Excerpt
How to obtain truly informed consent for genomic sequencing in clinical practice is a long-standing topic of debate in the field. Although some research (including interviews with health professionals and analysis of consent forms) has already been conducted in this space, these studies can often only give a vague indication of the nature of the information that patients or families might be provided. They are limited in that they assume that health professionals use the consent form as a template to guide the discussion with the patient when, in reality, very little research has been conducted that examines exactly how, if at all, consent forms are utilised in these consultations. The study conducted by Ellard et al. [1] goes a step further than most to explore what happens during a consent interaction when patients are offered diagnostic genomic sequencing within a public health service. Their team audio-recorded consent conversations between healthcare professionals and parents of paediatric patients offered whole genome sequencing (WGS) through the Genomic Medicine Service across seven regionally diverse NHS Trusts in the United Kingdom. However, the study also raises further questions, including how to make a consent interaction into a legitimate conversation and what information is important to convey during this conversation…

Ethics of Digitalization and Artificial Intelligence in Mental Healthcare

Book Chapter
Emanuel Schwarz, Andreas Meyer-Lindenberg
Ethics in Psychiatry, 20 March 2025 [Springer]
Abstract
Digitalization in healthcare encompasses a broad spectrum of rapidly advancing technological developments aimed at improving the effectiveness and quality of medical care. This shift in medical care provisioning gives rise to several ethical challenges that need to be successfully addressed to ensure the responsible clinical implementation of digital technologies, and of the associated research. This chapter provides an overview of these challenges across different technological fields relevant for healthcare digitalization, with particular focus on the aspects of pronounced relevance for mental health applications. It explores ethical considerations relevant to electronic health record systems and other data platforms, which are a cornerstone for the future development of healthcare solutions, such as through machine learning technology. We discuss important questions regarding informed consent and data reuse, as well as aspects relevant for digital mental health interventions. The chapter then focuses on mobile technology and data analytics, with particular emphasis on precision medicine approaches. Finally, digital twins are discussed as an upcoming technological development that brings about several unique ethical challenges. Addressing the ethical challenges of digital healthcare applications will be the basis for their responsible development and use, and will aid in building the trust among different stakeholders that is fundamental for clinical care and future research.

Managing legal risks in health information exchanges: A comprehensive approach to privacy, consent, and liability

Tariq K Alhasan
Journal of Healthcare Risk Management, 4 March 2025
Abstract
Health Information Exchanges (HIEs) are revolutionizing healthcare by facilitating secure and timely patient data sharing across diverse organizations. However, their rapid expansion has introduced significant legal and ethical challenges, particularly regarding privacy, informed consent, and liability risks. This paper critically assesses the effectiveness of existing legal frameworks, including the Health Insurance Portability and Accountability Act (HIPAA) and the General Data Protection Regulation (GDPR), in addressing these challenges, revealing gaps in their application within HIEs. It argues that current consent models fail to provide meaningful control for patients, while privacy protections are weakened by issues such as re-identification and jurisdictional inconsistencies. Moreover, liability in data breaches remains complex due to ambiguous responsibility among stakeholders. The study concludes that reforms are needed, including dynamic consent models, standardized liability frameworks, and enhanced data governance structures, to ensure secure, ethical, and effective data sharing. These changes are essential to fostering patient trust, improving healthcare delivery, and aligning with Sustainable Development Goal (SDG) 3: ensuring healthy lives and promoting well-being for all.

Editor’s note: we recognise that the proposals in this article are at odds with a number of regulatory structures including GDPR and HIPAA.

Enhancing patient autonomy in data ownership: privacy models and consent frameworks for healthcare

Review Article
Minal R. Narkhede, Nilesh I. Wankhede, Akanksha M. Kamble
Journal of Digital Health, 3 March 2025
Abstract
Patient autonomy in healthcare has become increasingly significant in the digital age as individuals seek greater control over their health data. This review examines the ethical, legal and technological aspects of patient data ownership, emphasizing the need for privacy models and consent frameworks to empower patients, safeguard privacy and enhance transparency. Traditional doctor-patient confidentiality faces challenges due to advancements such as electronic health records, artificial intelligence and wearable technologies, necessitating updated frameworks to protect patient rights. Private, public and hybrid privacy models present varying implications for data control, security and societal benefits. Emerging technologies such as blockchain and AI are revolutionizing data privacy by decentralizing data storage and enabling patient control while ensuring secure and ethical data utilization. Advanced consent frameworks, including dynamic and granular consent, provide patients with flexibility and transparency and promote trust and active participation in data-sharing decisions. Real-world implementations, such as Australia’s My Health Record and Estonia’s e-Health system, demonstrate the potential of patient-centric privacy frameworks to enhance healthcare quality and innovation. However, significant challenges persist, including regulatory ambiguities, cybersecurity risks and gaps in digital literacy. Addressing these issues requires collaboration among stakeholders to develop adaptable, secure and interoperable systems that prioritize patient autonomy. By integrating patient education, fostering interoperability and leveraging adaptive technologies, healthcare systems can balance privacy and innovation, build trust and ensure ethical data practices that empower individuals while advancing public health objectives.

Striking the Balance: Genomic Data, Consent and Altruism in the European Health Data Space

Book Chapter
Eila El Asry, Juli Mansnérus, Sandra Liede
The European Health Data Space, 2025 [Taylor & Francis]
Abstract
The Data Governance Act (DGA) defines data altruism as the sharing of data for purposes of general interest without seeking or receiving reward. The consent of the data subject is required if personal data is shared within the context of data altruism. Simultaneously, the European Health Data Space (EHDS) sets as one of its objectives to ensure a consistent and efficient framework for the secondary use of health data for the purposes of, inter alia, research, innovation and regulatory activities, thus at least partly sharing common goals with the concept of data altruism. The sharing of health data under the EHDS for secondary use is, however, not in principle based on the individual’s consent, though the final text includes an opt-out mechanism. This chapter discusses the compatibility of and relation between the data subject consent requirements in the DGA, GDPR and EHDS. While there is huge potential in the advanced use of genomic data for innovative biomedical research, advanced analytics and access to digitised and personalised healthcare, genetic data is inherently sensitive. Balancing the data subjects’ self-determination rights with the critical need for access to valuable data that can potentially save lives presents a significant challenge in this context.

Accelerating implementation of visual key information to improve informed consent in research: a single-institution feasibility study and implementation testing

Angela Hill, Ashley J Housten, Krista Cooksey, Eliana Goldstein, Jessica Mozersky, Mary C Politi
BMJ Open, 18 March 2025
Abstract
Objective
Current consent processes often fail to communicate study information effectively and may lead to disparities in study participation. The 2018 Common Rule introduced a mandatory key information (KI) section as a means of improving consent documents; however, the KI section frequently remains lengthy and prohibitively complex. We conducted a feasibility study of an accessible visual KI template for use in routine studies.
Design
Parallel feasibility study and implementation testing.
Setting
Single Midwestern US academic centre, between July 2023 and July 2024.
Participants
To develop and implement the visual KI template, we used rapid implementation science methods and recruited decision-making and clinical experts, patients and community partners to iteratively adapt the KI template. To assess its efficacy, we surveyed patient participants eligible to enrol in one of four clinical trials that used the visual KI template as part of informed consent.
Primary and secondary outcome measures
The primary outcome was participant knowledge about clinical trial details. Secondary outcomes included decisional conflict about joining the trial (validated SURE measure), KI template acceptability (validated Acceptability of Intervention Measure) and perceived self-efficacy communicating about trial details with researchers/clinicians (items adapted from the Perceived Efficacy in Patient/Physician Interaction measure). Feasibility was evaluated based on reach, number of modifications needed to tailor the intervention to each pilot trial, and time required for ethics reviews.
Results
Of 85 study participants across the four clinical trials using the visual KI page, the weighted mean knowledge score about trial details was 87.4% correct (range 77.8%-88.9%). Few (n=9; 10.6%) reported decisional conflict about whether to participate. Almost all (n=82; 96.5%) participants stated that they approved of using the visual KI template, and most (n=79; 92.9%) reported feeling confident asking clinicians or researchers questions about the trial.
Conclusions
Visual KI templates can improve potential participants’ comprehension and, in doing so, may reduce barriers to participation in research. Parallel feasibility studies and implementation science methods can facilitate the rapid development and evaluation of evidence-based interventions, such as improved informed consent templates.