A Heuristic for Notifying Patients About AI: From Institutional Declarations to Informed Consent

Open Peer Commentaries
Matthew Elmore, Nicoleta Economou-Zavlanos, Michael Pencina
The American Journal of Bioethics, 24 February 2025
Excerpt
    The principle of respect for autonomy, often expressed as the right to understand and make decisions about one's care, has recently gained attention in AI-related bioethics. Hurley et al. (2025) have made an important contribution by examining the White House's Blueprint for an AI Bill of Rights, asking how the right to notice and explanation might apply in healthcare contexts. They propose three possible functions for this right in patient care: (1) to provide a simple "FYI" to patients about the use of AI; (2) to foster education and trust; and (3) to serve as part of a patient's right to informed consent.
This commentary offers a heuristic for determining how best to plot these three aims (Table 1). Simplifying the recent work of Rose and Shapiro (2024), our heuristic lays out the functions described by Hurley et al. on a four-quadrant grid, scaling notification practices along two axes: the degree of AI autonomy and the degree of clinical risk. The need for robust consent increases when clinical risk is higher and when AI has greater autonomy in decision-making. This heuristic is adaptable to institution-specific measures of clinical risk, and it also provides flexibility for institutions to address their unique workflows…
