From black box to clarity: Strategies for effective AI informed consent in healthcare


Research paper

Chau, M.G. Rahman, T. Debnath

Artificial Intelligence in Medicine, 24 May 2025

Abstract

Background

Informed consent is fundamental to ethical medical practice, ensuring that patients understand the procedures they undergo, the associated risks, and available alternatives. The advent of artificial intelligence (AI) in healthcare, particularly in diagnostics, introduces complexities that traditional informed consent forms do not adequately address. AI technologies, such as image analysis and decision-support systems, offer significant benefits but also raise ethical, legal, and practical concerns regarding patient information and autonomy.

Main body

The integration of AI in healthcare diagnostics necessitates a re-evaluation of current informed consent practices to ensure that patients are fully aware of AI’s role, capabilities, and limitations in their care. Existing standards, such as those in the UK’s National Health Service and the US, highlight the need for transparency and patient understanding but often fall short when applied to AI. The “black box” phenomenon, where the inner workings of AI systems are not transparent, poses a significant challenge. This lack of transparency can lead to over-reliance or distrust in AI tools by clinicians and patients alike. Additionally, the current informed consent process often fails to provide detailed explanations about AI algorithms, the data they use, and inherent biases. There is also a notable gap in the training and education of healthcare professionals on AI technologies, which impacts their ability to communicate effectively with patients. Ethical and legal considerations, including data privacy and algorithmic fairness, are frequently inadequately addressed in consent forms. Furthermore, integrating AI into clinical workflows presents practical challenges that require careful planning and robust support systems.

Conclusion

This review proposes strategies for redesigning informed consent forms. These include using plain language, visual aids, and personalised information to improve patient understanding and trust. Implementing continuous monitoring and feedback mechanisms can ensure the ongoing effectiveness of these forms. Future research should focus on developing comprehensive regulatory frameworks and enhancing communication techniques to convey complex AI concepts to patients. By improving informed consent practices, we can uphold ethical standards, foster patient trust, and support the responsible integration of AI in healthcare, ultimately benefiting both patients and healthcare providers.

“Does Black Box AI In Medicine Compromise Informed Consent?”


Research Article

Samuel Director

Philosophy & Technology, 13 May 2025

Open Access

Abstract

Recently, there has been a large push for the use of artificial intelligence in medical settings. The promise of artificial intelligence (AI) in medicine is considerable, but its moral implications are insufficiently examined. If AI is used in medical diagnosis and treatment, it may pose a substantial problem for informed consent. The short version of the problem is this: medical AI will likely surpass human doctors in accuracy, meaning that patients have a prudential reason to prefer treatment from an AI. However, given the black box problem, medical AI cannot explain to patients how it makes decisions, yet such an explanation seems to be required by informed consent. Thus, it seems that doing what is best for patients (treatment via AI), even if patients want to permit this, might be prohibited by medicine’s commitment to informed consent. Conflicts between beneficence and autonomy are not new, but medical AI poses a novel version of this conflict, because this problem is one in which even if the patient says they want to use their autonomy to receive better care, the commitment to autonomy (via informed consent) seems to block them from doing so. Given this dilemma, should we abandon informed consent, or should we not use medical AI? My thesis is that we can have our cake and eat it too; we can use opaque AI in clinical medicine and retain our commitment to informed consent, although it may require revising our understanding of informed consent. Specifically, it will require us to distinguish between two levels of consent (higher-order and first-order consent).

Ethical approval and informed consent in mental health research: a scoping review


Leona Cilar Budler, Gregor Stiglic

AI and Society, 1 May 2025

Abstract

Although there is a wide range of scientific papers introducing artificial intelligence techniques in the mental health field, there is a lack of literature assessing the reporting of ethical concerns in such studies. In addition, it is not yet known whether the authors seek ethical approval or informed consent while performing such research. This study aimed to investigate the extent to which studies in the mental health domain that utilize chatbots either ignore or incompletely disclose patient consent and ethical approval from the responsible review boards. A scoping literature search was performed in PsycARTICLES, PubMed, and Web of Science using both MeSH terms and free-text keywords. Following PRISMA-ScR guidelines, we also contacted study authors to verify missing information about ethical approval or informed consent, enhancing the transparency and rigor of our analysis. Among the 27 studies reviewed, 13 reported obtaining ethical approval, and 16 reported collecting informed consent. The remaining studies did not provide such information. These findings underscore the ethical complexities surrounding AI in mental health, especially regarding the collection, storage, and use of sensitive patient data. There is a correlation between sample size and the acquisition of ethical approval, particularly in studies published in low-impact-factor journals. Future research should investigate the role of journal policies in influencing ethical practices. In addition, training programs could be developed to educate researchers on the importance of ethics, particularly in studies with smaller sample sizes.

Whose Reality? Consent Boundaries and Free Speech Arguments in the Politics of Generative AI


Sara Concetta Santoriello

Politikon: The IAPSS Journal of Political Science, 28 April 2025

Abstract

Generative AI enables the creation of increasingly realistic deepfakes that challenge content authenticity assessment. This research examines how anti-woke opinion leaders frame deepfake technology within broader cultural discourse. Through narrative analysis of statements and media between 2018 and 2024, we identify significant inconsistencies in these figures’ approaches to consent and bodily autonomy. While championing unrestricted speech when deepfakes target women, minorities, or political opponents, these commentators often advocate for regulation when personally affected. This selective application of principles reveals how deepfake technology disproportionately impacts minoritized groups while reinforcing existing power hierarchies. The research exposes fundamental tensions within anti-woke discourse between freedom of expression and protection from exploitation. Ultimately, deepfakes serve as a lens through which to understand broader ideological inconsistencies around technological governance, highlighting the urgent need for consent-based approaches to synthetic media regulation.

Enhancing informed consent in oncological surgery through digital platforms and artificial intelligence

Review Article
Alex Boddy
Clinical Surgical Oncology, June 2025
Open Access
Abstract
Informed consent is a cornerstone of ethical medical practice, particularly in high-stakes oncological surgery where treatment options are complex and risks are significant. This paper explores the potential of digital platforms and artificial intelligence (AI) to enhance the informed consent process. The traditional consent process, reliant on face-to-face interactions and paper-based documentation, is increasingly being supplemented by digital solutions that offer remote consultations, personalized patient information, and electronic consent forms. These digital pathways not only improve accessibility and patient comprehension but also streamline documentation, reducing errors and administrative burdens. AI technologies, including ambient digital scribes and large language models (LLMs), could further augment this process by generating personalized risk assessments, simplifying complex medical information, and facilitating multilingual communication. However, success will also depend on addressing ethical concerns, ensuring equitable access, and preserving the irreplaceable human connection between patients and clinicians. By augmenting rather than replacing clinician expertise, digital platforms and AI can empower patients to make truly informed decisions in oncological care.

The Digital Double: Data Privacy, Security, and Consent in AI Implants

Research Article
Omid Panahi, Soren Falkner
Digital Journal of Engineering Science and Technology, 17 March 2025
Open Access
Abstract
Artificial intelligence (AI) implants are rapidly emerging as a transformative technology with the potential to revolutionize healthcare, enhance human capabilities, and blur the boundaries between humans and machines. However, the integration of AI into the human body raises complex ethical, legal, and social questions, particularly concerning data privacy, security, and consent. This paper explores the concept of the “digital double,” a virtual representation of an individual generated from the data collected by AI implants. It examines the potential benefits and risks of creating and utilizing digital doubles, focusing on the implications for data privacy, security, and informed consent. The paper analyses the challenges of protecting sensitive health information, ensuring data security, and obtaining meaningful consent from individuals with AI implants. It also discusses the potential for misuse and abuse of digital doubles, including unauthorized access, surveillance, and discrimination. Finally, the paper proposes a framework for addressing these challenges, emphasizing the need for robust data protection measures, transparent consent processes, and ethical guidelines to safeguard individual autonomy and privacy in the age of AI implants.

Enabling Demonstrated Consent for Biobanking with Blockchain and Generative AI

Editor’s Note:
The following Barnes et al. article, “Enabling Demonstrated Consent for Biobanking with Blockchain and Generative AI,” has been previously shared in this digest. We are sharing it again because this target article in the American Journal of Bioethics has resulted in a number of peer commentaries, which follow below. These commentaries offer a range of perspectives on biobanking, blockchain and generative AI, and consent. These are areas which we continue to examine in our work.

Caspar Barnes, Mateo Riobo Aboy, Timo Minssen, Jemima Winifred Allen, Brian D. Earp, Julian Savulescu
The American Journal of Bioethics, 5 November 2024
Abstract
Participation in research is supposed to be voluntary and informed. Yet it is difficult to ensure people are adequately informed about the potential uses of their biological materials when they donate samples for future research. We propose a novel consent framework which we call “demonstrated consent” that leverages blockchain technology and generative AI to address this problem. In a demonstrated consent model, each donated sample is associated with a unique non-fungible token (NFT) on a blockchain, which records in its metadata information about the planned and past uses of the sample in research, and is updated with each use of the sample. This information is accessible to a large language model (LLM) customized to present this information in an understandable and interactive manner. Thus, our model uses blockchain and generative AI technologies to track, make available, and explain information regarding planned and past uses of donated samples.
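The mechanics of the proposed model (a per-sample token whose metadata is an append-only log of planned and past uses) can be sketched in ordinary code. The following is an illustrative stand-in only, not the authors' implementation: the class and field names are our own, and a hash-chained Python list substitutes for an actual on-chain NFT record to show why each recorded use is tamper-evident.

```python
from dataclasses import dataclass, field
import hashlib
import json

@dataclass
class ConsentToken:
    """Hypothetical stand-in for the per-sample NFT in a demonstrated
    consent model: metadata lists planned uses, and every actual research
    use is appended to a hash-chained, tamper-evident log."""
    sample_id: str
    planned_uses: list
    use_log: list = field(default_factory=list)  # append-only history

    def _entry_hash(self, use: str, prev: str) -> str:
        # Hash each entry together with its predecessor's hash,
        # mimicking the immutability guarantee of a blockchain record.
        payload = json.dumps({"use": use, "prev": prev}, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

    def record_use(self, description: str) -> None:
        """Append a new research use of the donated sample."""
        prev = self.use_log[-1]["hash"] if self.use_log else "genesis"
        self.use_log.append({
            "use": description,
            "prev": prev,
            "hash": self._entry_hash(description, prev),
        })

    def verify(self) -> bool:
        """Re-derive every hash; any altered entry breaks the chain."""
        prev = "genesis"
        for entry in self.use_log:
            if entry["hash"] != self._entry_hash(entry["use"], prev):
                return False
            prev = entry["hash"]
        return True
```

In the full proposal this log would live in NFT metadata, and an LLM front-end would translate it into plain-language answers to donor questions; the sketch only captures the record-keeping layer.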

Demonstrated Consent and the Common Good: On Withdrawal of Consent in Stem Cell Research

Open Peer Commentaries
Tijs Rosema, Martine de Vries, Hanna Lammertse, Roland Bertens, Nienke de Graeff
American Journal of Bioethics, 7 April 2025
Excerpt
Barnes et al. (2025) argue that demonstrated consent enhances donor autonomy. This is because demonstrated consent offers donors “ongoing accessibility of information according to donor preferences” and so gives donors “actionable rights to reassess or withdraw consent” (Barnes et al. 2025, 99).
Since demonstrated consent uses broad consent as default, it allows researchers to conduct various research projects based on a single initial consent procedure, and so helps contribute to societal interests (Barnes et al. 2025). Therefore, demonstrated consent meets the so-called “balance criterion” which Barnes et al. (2025) introduced to underline that informed consent frameworks should also balance donor autonomy with broader societal interests, including progress in science and medicine.
But what does the balance criterion imply for situations in which donor autonomy leads to significant negative consequences for societal interests? This question may arise when donors withdraw their consent. By taking stem cell research as an example, we reason that although demonstrated consent enhances donor autonomy, the exercise of donor autonomy by withdrawing consent should not always lead to the discontinuation of research.
We argue that the right of withdrawal can be limited in stem cell research if a donor is properly informed about limits of withdrawal when providing initial consent. Additionally, we see opportunities for demonstrated consent to compensate for this proposed limitation of donor autonomy. We thus provide a more detailed elaboration on demonstrated consent and the balance criterion in the context of stem cell research…

On the Complexities of Enabling Demonstrated Consent

Open Peer Commentaries
Panagiotis Alexiou, Joel Azzopardi, Claude Julien Bajada, Jean-Paul Ebejer, Gillian M. Martin, Nikolai Paul Pace
American Journal of Bioethics, 7 April 2025
Excerpt
Barnes et al. (2025) introduce a novel vision for biobanking consent in their article “Enabling Demonstrated Consent for Biobanking with Blockchain and Generative AI.” Their concept of “demonstrated consent” leverages blockchain technology and non-fungible tokens, along with large language models, to enhance transparency and participant engagement. We put this novel framework in the context of existing approaches and highlight some key questions that arise.
The central problem that Barnes et al. seek to address is the inherent limitations of the two traditional consent models in biobanking:

  • Study-specific consent: A type of consent where individuals give permission for their bio-samples and data to be used in a single, well-defined research project. While ethically robust, it is often impractical in the context of long-term biobanking due to the increased administrative burden and inflexibility.
  • Broad consent: A type of consent where individuals allow their bio-samples and data to be used in future, sometimes unspecified, research projects, with few or no specific restrictions and, importantly, without the need for participants to be re-contacted or consulted. This approach is more efficient, but can undermine the autonomy of participants by failing to provide sufficient information about future research uses. Broad consent needs to be paired with strategies of risk mitigation and continuous provision of information to participants.

In response to these challenges, various alternative models have been proposed, including tiered informed consent (Tiffin 2018), meta-consent (Ploug and Holm 2016), and dynamic consent (Kaye et al. 2015; Budin-Ljøsne et al. 2017). These models try to increase the involvement of participants, but are vulnerable to issues similar to study-specific consent. Dynamic consent, in particular, has gained traction as a means of allowing participants to dynamically update their consent preferences in real time, thus tailoring their participation to studies based on their preferences. Individuals, however, have to constantly manage their consent, leading to potential choice overload and consent fatigue. The “demonstrated consent” model proposed by Barnes et al. aims to circumvent these issues by providing a secure, transparent, and easily accessible source of information, without requiring participants to continuously manage their preferences. The central contribution of this manuscript is the proposed integration of non-fungible tokens (NFTs) and large language models (LLMs) to tackle these issues…

Challenges to Demonstrated Consent in Biobanking: Technical, Ethical, and Regulatory Considerations

Open Peer Commentaries
Jasmine E. McNealy, Megan Doerr
American Journal of Bioethics, 7 April 2025
Excerpt
We read with interest Barnes and colleagues’ recent article, “Enabling Demonstrated Consent for Biobanking with Blockchain and Generative AI” (Barnes et al. 2025). We appreciate their efforts to succinctly ground their proposal within consent scholarship and their distillation of the ethical challenges of informed consent for repository contexts. Like many, we are vocal advocates for improving the informed consent process, especially within repository-enabled research (Doerr et al. 2021). We also strongly support the creative use of technology to mitigate consent’s shortcomings (Moore et al. 2017; Kraft and Doerr 2018). However, we are concerned that Barnes et al.’s proposal faces several critical technical challenges to implementation, does not account for key features of repository-enabled research, and adds novel regulatory concerns to the mix…