Clarifying When Consent Might Be Illusory in Notice and Explanation Rights

Bryan Pilkington, Charles E. Binkley
The American Journal of Bioethics, 24 February 2025
Excerpt
In “Patient Consent and The Right to Notice and Explanation of AI Systems Used in Health Care,” Hurley et al. (2025) helpfully summarize key features of the AI Bill of Rights, focusing on the right to notice and explanation (RNandE) and arguing that greater clarity, including a specified goal, is needed. Though we concur with their overall assessment and appreciate the authors’ reliance on our work on consent (Binkley and Pilkington 2023a), further clarification is needed, as some of the nuance of our position appears to have been lost in their goal categorizations. In exploring the normative function of RNandE, the authors ask what RNandE “is meant to achieve and how (and why) it is morally important at the individual patient level in healthcare.” We hold that, for the three categories of use cases the authors reference (chatbot, diagnostic, and prognostic), the benefit to patients in claiming a RNandE of AI systems in their healthcare is not that AI models per se will be used. Rather, the benefit to patients is notice and explanation of how the output of the AI systems (chats, diagnoses, prognoses) will be used in their care, how patients will be informed, and with whom, besides patients and the clinicians involved in their care, the information will be shared.
