Disclosure as Absolution in Medicine: Disentangling Autonomy from Beneficence and Justice in Artificial Intelligence
Guest Editorial
Kayte Spector-Bagdady, Alex John London
The American Journal of Bioethics, 24 February 2025
Introduction
The rush to deploy artificial intelligence (AI) and machine learning (ML) systems in medicine highlights the need for bioethics to deepen its normative engagement in disentangling autonomy from beneficence and justice in responsible medical practice. One reason informed consent is such a distinctive tool is its morally transformative power: actions that would otherwise be illegal or unethical are rendered permissible by the provision of free and informed consent. But consent is not a panacea that absolves all risks and burdens. The proliferation of AI/ML systems highlights that every additional call for disclosure warrants careful scrutiny of its goals and of the values those goals reflect (Hurley et al. 2025).
For example, while informed consent might be appropriate when there is a choice about whether to use an AI tool in clinical care, we cannot let deference to autonomy substitute for rigorous standards—grounded in beneficence and justice—that ensure the safe, effective, and equitable deployment of AI in medicine. Shortcomings in AI technologies that fail to meet those standards cannot be absolved through the informed consent process, and the assumption that patients are empowered to assess or alleviate such deficiencies is misguided. While much has been written about the inability of informed consent to bear its increasing transformative burden (Grady et al. 2017), further exploration of the appropriate division of moral labor among ethical values in the use of AI in clinical practice is warranted.