Artificial intelligence in educational games and consent under general data protection regulation

Eirini Mougiakou, Spyros Papadimitriou, Konstantina Chrysafiadi, Maria Virvou
Intelligent Decision Technologies, 18 March 2025
Abstract
As Artificial Intelligence becomes increasingly integrated into educational games, complying with the General Data Protection Regulation (GDPR)—a legal framework governing data protection and privacy in the European Union—remains an important yet complex challenge, particularly when minors are involved. Users are required to provide consent multiple times, often unexpectedly, at different game levels. This process is further complicated by the varying durations for which consent remains valid. As a result, users—especially minors—may become confused about the consent they have given. Additional concerns arise when the educational game is AI-equipped. If AI is not involved, no new data is generated. However, if AI is present, new data is continuously produced, necessitating ongoing consent. For example, a user may consent to personalisation, which could lead the game to categorise them in unintended ways, such as labelling them a ‘poor student’. This paper explores GDPR challenges in AI-empowered educational games, focusing on user consent, AI-inferred data, and compliance gaps. Intelligent educational games rely on adaptive decision-making algorithms to personalise learning experiences, making them a subset of Intelligent Decision Technologies. Our research is based on a fuzzy-based educational game developed as a testbed for studying GDPR compliance in AI-driven decision-making. The findings provide insights into ethical AI governance, dynamic consent management, and the intersection of regulatory compliance with adaptive, data-driven decision systems in intelligent educational technologies. Our research shows that not all personal data exist from the outset, when original consent is granted; personal data are also generated throughout the process.

Editor’s note: We recognise that the proposals in this article are at odds with a number of regulatory structures.

Patient consent for the secondary use of health data in artificial intelligence (AI) models: A scoping review

Khadijeh Moulaei, Saeed Akhlaghpour, Farhad Fatehi
International Journal of Medical Informatics, 8 March 2025
Abstract
Background
The secondary use of health data for training Artificial Intelligence (AI) models holds immense potential for advancing medical research and healthcare delivery. However, ensuring patient consent for such utilization is paramount to uphold ethical standards and data privacy. Patient informed consent means patients are fully informed about how their data will be collected, used, and protected, and they voluntarily agree to allow their data to be used for AI models. In addition to formal consent frameworks, establishing a social license is critical to foster public trust and societal acceptance for the secondary use of health data in AI systems. This study examines patient consent practices in this domain.
Method
In this scoping review, we searched Web of Science, PubMed, and Scopus. We included studies in English that addressed the core issues of interest, namely, privacy, security, legal, and ethical issues related to the secondary use of health data in AI models. Articles not addressing the core issues, as well as systematic reviews, meta-analyses, books, letters, conference abstracts, and study protocols were excluded. Two authors independently screened titles, abstracts, and full texts, resolving disagreements with a third author. Data was extracted using a data extraction form.
Results
After screening 774 articles, a total of 38 articles were ultimately included in the review. Across these studies, a total of 178 barriers and 193 facilitators were identified. We consolidated similar codes and extracted 65 barriers and 101 facilitators, which we then categorized into four themes: “Structure,” “People,” “Physical system,” and “Task.” We identified notable emphasis on “Legal and Ethical Challenges” and “Interoperability and Data Governance.” Key barriers included concerns over privacy and security breaches, inadequacies in informed consent processes, and unauthorized data sharing. Critical facilitators included enhancing patient consent procedures, improving data privacy through anonymization, and promoting ethical standards for data usage.
Conclusion
Our study underscores the complexity of patient consent for the secondary use of health data in AI models, highlighting significant barriers and facilitators within legal, ethical, and technological domains. We recommend the development of specific guidelines and actionable strategies for policymakers, practitioners, and researchers to improve informed consent, ensuring privacy, trust, and ethical use of data, thereby facilitating the responsible advancement of AI in healthcare.

Patient Consent and The Right to Notice and Explanation of AI Systems Used in Health Care

Meghan E Hurley, Benjamin H Lang, Kristin Marie Kostick-Quenet, Jared N Smith, Jennifer Blumenthal-Barby
The American Journal of Bioethics, March 2025
Abstract
Given the need for enforceable guardrails for artificial intelligence (AI) that protect the public and allow for innovation, the U.S. Government recently issued a Blueprint for an AI Bill of Rights which outlines five principles of safe AI design, use, and implementation. One in particular, the right to notice and explanation, requires accurately informing the public about the use of AI that impacts them in ways that are easy to understand. Yet, in the healthcare setting, it is unclear what goal the right to notice and explanation serves and why patient-level disclosure is morally important. We propose three normative functions of this right: (1) to notify patients about their care, (2) to educate patients and promote trust, and (3) to meet standards for informed consent. Additional clarity is needed to guide practices that respect the right to notice and explanation of AI in healthcare while providing meaningful benefits to patients.

Editor’s note: The following five articles are commentaries on this article which appeared in the American Journal of Bioethics.

Disclosure as Absolution in Medicine: Disentangling Autonomy from Beneficence and Justice in Artificial Intelligence

Guest Editorial
Kayte Spector-Bagdady, Alex John London
The American Journal of Bioethics, 24 February 2025
Introduction
The rush to deploy artificial intelligence (AI) and machine learning (ML) systems in medicine highlights the need for bioethics to deepen its normative engagement in disentangling autonomy from beneficence and justice in responsible medical practice. One of the reasons that informed consent is such a unique tool is its morally transformative nature. Actions that would otherwise be illegal or unethical are rendered permissible by the provision of free and informed consent. But consent is not a panacea to absolve all risks and burdens. The proliferation of AI/ML systems highlights that every additional call for disclosure warrants deep introspection about its goals and the values they reflect (Hurley et al. 2025).
For example, while informed consent might be appropriate when there is a choice whether to use an AI tool in clinical care, we cannot let deference to autonomy substitute for rigorous standards—based in beneficence and justice—that ensure the safe, effective, and equitable deployment of AI in medicine. Shortcomings in AI technologies that do not meet those standards cannot otherwise be absolved through the informed consent process. The assumption that patients are empowered to assess or alleviate such deficiencies is misguided. While much has been written about the inability of informed consent to bear its increasing transformative burden (Grady et al. 2017), further exploration of the appropriate division of moral labor between ethical values in the use of AI in clinical practice is warranted.

Beyond Disclosure: Rethinking Patient Consent and AI Accountability in Healthcare

Open Peer Commentaries
Tony Yang
The American Journal of Bioethics, 24 February 2025
Excerpt
The growing integration of artificial intelligence (AI) into healthcare raises fundamental questions about patient consent, autonomy, and trust. The concept of a “Right to Notice and Explanation” (RN&E) articulated in Hurley et al.’s work highlights an essential ethical obligation: ensuring that patients are aware of AI’s role in their care and understand its implications (Hurley et al. 2025). While this framework is compelling, it can be further strengthened by situating it within broader regulatory, ethical, and operational contexts. Insights from recent regulatory developments, systematic reviews, and calls for responsible AI highlight the need to reconceptualize RN&E as a dynamic, participatory, and institutionally embedded process (Kleinberg et al. 2018).
One of the key gaps in current discussions on RN&E is the lack of clarity regarding what constitutes sufficient “notice” and “explanation” in practice (Hurley et al. 2025). While it is clear that patients should be informed when AI influences their care, it remains less clear how that information should be conveyed and at what level of technical detail. Explanations must balance transparency with simplicity, ensuring that patients understand the role of AI without being overwhelmed by technical complexity (Siala and Wang 2022). This can be achieved by designing explanations with a human-centered approach, focusing on health literacy and contextual relevance. For instance, AI explanations should be aligned with existing health literacy practices, leveraging the principles of plain language and clear communication to ensure patient comprehension (Sørensen et al. 2012). By embedding these elements into health system workflows, RN&E can move beyond a static “disclosure” model to become a participatory process in which patients are not only informed but also empowered to ask questions, seek clarification, and participate in decisions regarding their care…

Community-Based Consent Model, Patient Rights, and AI Explainability in Medicine

Open Peer Commentaries
Aorigele Bao, Yi Zeng
The American Journal of Bioethics, 24 February 2025
Excerpt
…Considering the explainability issue in healthcare, we believe that a community-based consent model should supplement the notification and right-to-explanation model. Here, the community-based consent model refers to promoting the protection of patients’ rights to notification and explanation in the field of healthcare through the construction and maintenance of a community with diverse representatives. Specifically, the community-based consent model can be understood as a hybrid representative community composed of patient representatives, patient advocacy groups, disease-specific societies, or local healthcare structures that intervenes as a supplementary reference during the early development and clinical use of artificial intelligence tools for healthcare, ensuring patients’ rights to notice and explanation…

A Heuristic for Notifying Patients About AI: From Institutional Declarations to Informed Consent

Open Peer Commentaries
Matthew Elmore, Nicoleta Economou-Zavlanos, Michael Pencina
The American Journal of Bioethics, 24 February 2025
Excerpt
The principle of respect for autonomy, often expressed as the right to understand and make decisions about one’s care, has recently gained attention in AI-related bioethics. Hurley et al. (2025) have made an important contribution by examining the White House’s Blueprint for an AI Bill of Rights, asking how the right to notice and explanation might apply in healthcare contexts. They propose three possible functions for this right in patient care: (1) to provide a simple “FYI” to patients about the use of AI; (2) to foster education and trust; and (3) to serve as part of a patient’s right to informed consent.
This commentary offers a heuristic for determining how best to plot these three aims (Table 1). Simplifying the recent work of Rose and Shapiro (2024), our heuristic lays out the functions described by Hurley et al. on a four-quadrant grid, scaling notification practices along two axes: the degree of AI autonomy and the degree of clinical risk. The need for robust consent increases when clinical risk is higher and when AI has greater autonomy in decision-making. This heuristic is adaptable to institution-specific measures of clinical risk, and it also provides flexibility for institutions to address their unique workflows…

Clarifying When Consent Might Be Illusory in Notice and Explanation Rights

Bryan Pilkington, Charles E. Binkley
The American Journal of Bioethics, 24 February 2025
Excerpt
In “Patient Consent and The Right to Notice and Explanation of AI Systems Used in Health Care,” Hurley et al. (2025) helpfully summarize key features of the AI Bill of Rights, focusing on the right to notice and explanation (RNandE) and arguing that greater clarity, including a specified goal, is needed. Though we concur with their overall assessment and appreciate the authors’ reliance on our work (Binkley and Pilkington 2023a) on consent, further clarification is needed, as some of the nuance of our position appears to have been lost in their goal categorizations. In exploring the normative function of RNandE, the authors ask what RNandE “is meant to achieve and how (and why) it is morally important at the individual patient level in healthcare?” We hold that for the three categories of use cases that the authors reference (chatbot, diagnostic and prognostic), the benefit to patients in claiming a RNandE of AI systems in their healthcare is not that AI models per se will be used. Rather the benefit to patients is the RNandE about how the output of the AI systems (chats, diagnoses, prognoses) will be used in their care, how patients will be informed, and with whom, besides patients and the clinicians involved in their care, the information will be shared.

Consent for genomic sequencing: a conversation, not just a form

Comment
Danya F. Vears
European Journal of Human Genetics, 3 March 2025
Open Access
Excerpt
How to obtain truly informed consent for genomic sequencing in clinical practice is a long-standing topic of debate in the field. Although some research (including interviews with health professionals and analysis of consent forms) has already been conducted in this space, these studies can often only give a vague indication of the nature of the information that patients or families might be provided. They are limited in that they assume that health professionals use the consent form as a template to guide the discussion with the patient when, in reality, very little research has been conducted that examines exactly how, if at all, consent forms are utilised in these consultations. The study conducted by Ellard et al. [1] goes a step further than most to explore what happens during a consent interaction when patients are offered diagnostic genomic sequencing within a public health service. Their team audio-recorded consent conversations between healthcare professionals and parents of paediatric patients offered WGS through the Genomic Medicine Service across seven regionally diverse NHS Trusts in the United Kingdom. However, the study also raises further questions, including how to make a consent interaction into a legitimate conversation and what information is important to convey during this conversation…

Ethics of Digitalization and Artificial Intelligence in Mental Healthcare

Book Chapter
Emanuel Schwarz, Andreas Meyer-Lindenberg
Ethics in Psychiatry, 20 March 2025 [Springer]
Abstract
Digitalization in healthcare encompasses a broad spectrum of rapidly advancing technological developments aimed at improving the effectiveness and quality of medical care. This shift in how medical care is provisioned gives rise to several ethical challenges that need to be successfully addressed to ensure the responsible clinical implementation of digital technologies, and of the associated research. This chapter provides an overview of these challenges across different technological fields relevant for healthcare digitalization, with particular focus on aspects of pronounced relevance for mental health applications. It explores ethical considerations relevant to electronic health record systems and other data platforms, which are a cornerstone for the future development of healthcare solutions, such as through machine learning technology. We discuss important questions regarding informed consent and data reuse, as well as aspects relevant for digital mental health interventions. The chapter then focuses on mobile technology and data analytics, with particular emphasis on precision medicine approaches. Finally, digital twins are discussed as an upcoming technological development that brings about several unique ethical challenges. Addressing the ethical challenges of digital healthcare applications will be the basis for their responsible development and use, and will aid in building the trust among stakeholders that is fundamental for clinical care and future research.