Informed Consent in Educational AI Research Needs to Be Transparent, Flexible, and Dynamic
Commentary
Alexander Skulmowski
Mind, Brain, and Education, 5 January 2025
Open Access
Abstract
Generative artificial intelligence (AI) has become a major research trend in the fields of education and psychology. However, several risks posed by this technology concerning the cognitive and socio-emotional development of children and adolescents have been identified. While it would be highly useful to have a clear understanding of these potential negative effects, empirical results cannot be obtained without putting the participants of these studies in a situation that potentially endangers their development. Research fields such as the biomedical sciences utilize several measures to minimize risks, such as dose escalation and stopping rules. In addition, dynamic and flexible forms of informed consent could be adopted by our field to maximize transparency. By including methodological advancements and ethical developments in the psychological and educational research process, risks could be averted, and the ethical soundness of AI research involving children and adolescents could be maintained.
Category: Artificial Intelligence
Developing Project-Specific Consent Documents: A Registered Report for a Multistep Approach Using LLMs
Filipa Lopes, Carolina Trindade, Tânia Carvalho, Maria Strecht Almeida, Ana Sofia Carvalho
Drug Repurposing, 18 December 2024
Abstract
Within the scope of clinical trials, developing participant information sheets and informed consent forms is a complex task that demands clarity, precision, and compliance with regulatory standards. Developing these documents is crucial for ensuring that participants are fully informed about the research in which they are involved. However, the process is often time-consuming and resource-intensive. In this context, we present the development of a methodology enabling the use of Large Language Models to assist in the creation of information sheets and informed consent forms for clinical trials according to a predesigned template. This research is being conducted within the framework of the project REPO4EU (Precision drug REPurpOsing For EUrope and the world).
Ethical Challenges in the Integration of Artificial Intelligence in Palliative Care
Abiodun Adegbesan, Adewunmi Akingbola, Olajide Ojo, Otumara Urowoli Jessica, Uthman Hassan Alao, Uchechukwu Shagaya, Olajumoke Adewole, Owolabi Abdullahi
Journal of Medicine, Surgery, and Public Health, December 2024
Abstract
The integration of artificial intelligence (AI) into palliative care offers the possibility of improved patient outcomes through enhanced decision-making, personalized care, and reduced healthcare provider burden. However, the use of AI in this sensitive area presents significant ethical challenges which require serious consideration to ensure that technology serves the best interests of patients without compromising their rights or well-being. This narrative review explores the key ethical issues associated with AI in palliative care, with a focus on low-resource settings where these challenges are often intensified. The review examines essential ethical principles such as autonomy, beneficence, non-maleficence, and justice, and identifies critical concerns including data privacy, informed consent, algorithmic bias, and the risk of depersonalizing care. It also highlights the unique difficulties faced in low-resource environments, where the lack of infrastructure and regulatory frameworks can exacerbate these ethical risks. To address these challenges, the review offers actionable recommendations, such as developing context-specific guidelines, promoting transparency and accountability through explainable AI (XAI), and conducting regular ethical audits. Interdisciplinary collaboration is emphasized to ensure that AI systems are ethically designed and implemented, respecting cultural contexts and upholding patient dignity. This study contributes to the ongoing discourse on ethical AI integration in healthcare, indicating the need for careful consideration of ethical principles to ensure that AI enhances rather than undermines the compassionate care at the heart of palliative care. These findings serve as a foundation for future research and policy development in this emerging field.
Ethical Considerations in Using AI for Mental Health Diagnosis and Treatment Planning: A Scoping Review
Yewande Ojo
Proceedings of the International Conference on Artificial Intelligence and Robotics; Yaba, Nigeria, 26–28 November 2024
Abstract
Integrating Artificial Intelligence (AI) with mental healthcare presents a paradigm shift in diagnosis and treatment planning, offering potential efficiency, accuracy, and personalisation improvements. However, this technological advancement raises a complex array of ethical challenges that demand careful consideration. This research explores the vital ethical dimensions surrounding the adoption of AI in mental health contexts, emphasising the need for a balanced approach that maximises benefits while mitigating risks.
Central to these considerations is the imperative of privacy and data protection. The sensitivity of mental health information demands robust safeguards to prevent unauthorised access or misuse, while still allowing responsible data utilisation to drive AI-powered advancements. The assurance of fairness and non-discrimination in AI systems is critical, as racial bias could exacerbate disparities in mental healthcare access and outcomes. Transparency and explainability emerge as crucial factors in fostering trust and accountability. AI systems must be capable of providing clear rationales for their diagnoses and proposed treatment plans, helping clinicians and patients make informed decisions. This transparency is intimately linked to the principles of autonomy and informed consent, requiring that individuals fully understand the role of AI in their treatment and have the agency to accept or decline its use.
The integration of AI also necessitates a reevaluation of professional ethics and responsibilities for mental health practitioners. As AI systems assume more significant roles in diagnosis and treatment planning, the boundaries of professional judgment and accountability must be delineated. Moreover, the broader societal implications, including potential changes in public perception of mental healthcare and shifts in the healthcare workforce, warrant careful consideration.
Regulatory and governance frameworks play a pivotal role in addressing these ethical challenges. Policymakers face the complex task of developing adaptive regulations that foster innovation while ensuring robust ethical safeguards. This requires a collaborative approach involving clinicians, researchers, ethicists, patients, and technology developers.
Enhancing patient understanding in obstetrics: The role of generative AI in simplifying informed consent for labor induction with oxytocin
Amos Grünebaum, Joachim Dudenhausen, Frank A. Chervenak
Journal of Perinatal Medicine, 30 October 2024
Abstract
Informed consent is a cornerstone of ethical medical practice, particularly in obstetrics where procedures like labor induction carry significant risks and require clear patient understanding. Despite legal mandates for patient materials to be accessible, many consent forms remain too complex, resulting in patient confusion and dissatisfaction. This study explores the use of Generative Artificial Intelligence (GAI) to simplify informed consent for labor induction with oxytocin, ensuring content is both medically accurate and comprehensible at an 8th-grade readability level. GAI-generated consent forms streamline the process, automatically tailoring content to meet readability standards while retaining essential details such as the procedure’s nature, risks, benefits, and alternatives. Through iterative prompts and expert refinement, the AI produces clear, patient-friendly language that bridges the gap between medical jargon and patient comprehension. Flesch Reading Ease scores show improved readability, meeting recommended levels for health literacy. GAI has the potential to revolutionize healthcare communication by enhancing patient understanding, promoting shared decision-making, and improving satisfaction with the consent process. However, human oversight remains critical to ensure that AI-generated content adheres to legal and ethical standards. This case study demonstrates that GAI can be an effective tool in creating accessible, standardized, yet personalized consent documents, contributing to better-informed patients and potentially reducing malpractice claims.
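The Flesch Reading Ease scoring the authors rely on can be reproduced in outline. The sketch below implements the standard formula with a crude vowel-group syllable counter; the sample texts and the counter are illustrative assumptions, not the study's tooling:

```python
import re

def count_syllables(word):
    # Crude heuristic: one syllable per group of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    # Flesch Reading Ease = 206.835 - 1.015*(words/sentence) - 84.6*(syllables/word)
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

# Hypothetical consent-form sentences, before and after simplification.
original = ("Administration of exogenous oxytocin for the induction of labor may "
            "precipitate uterine tachysystole, necessitating continuous "
            "cardiotocographic surveillance.")
simplified = ("Oxytocin is a medicine that helps start labor. It can make "
              "contractions come too fast, so we will watch your baby's "
              "heartbeat the whole time.")

print(f"original:   {flesch_reading_ease(original):.1f}")
print(f"simplified: {flesch_reading_ease(simplified):.1f}")
```

Higher scores indicate easier text; scores in the 60–70 range correspond roughly to the 8th-grade target the authors describe.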
Evaluating AI-generated informed consent documents in oral surgery: A comparative study of ChatGPT-4, Bard Gemini Advanced, and human-written consents
Luigi Angelo Vaira, Jerome R. Lechien, Antonino Maniaci, Giuseppe Tanda, Vincenzo Abbate, Fabiana Allevi, Antonio Arena, Giada Anna Beltramini, Michela Bergonzani, Alessandro Remigio Bolzoni, Salvatore Crimi, Andrea Frosolini, Guido Gabriele, Fabio Maglitto, Miguel Mayo-Yáñez, Ludovica Orrù, Marzia Petrocelli, Resi Pucci, Alberto Maria Saibene, Stefania Troise, Giacomo De Riu
Journal of Cranio-Maxillofacial Surgery, 26 October 2024
Open Access
Abstract
This study evaluates the quality and readability of informed consent documents generated by AI platforms ChatGPT-4 and Bard Gemini Advanced compared to those written by a first-year oral surgery resident for common oral surgery procedures. The evaluation, conducted by 18 experienced oral and maxillofacial surgeons, assessed consents for accuracy, completeness, readability, and overall quality.
ChatGPT-4 consistently outperformed both Bard and human-written consents. ChatGPT-4 consents had a median accuracy score of 4 [IQR 4–4], compared to Bard's 3 [IQR 3–4] and the human consents' 4 [IQR 3–4]. Completeness scores were higher for ChatGPT-4 (4 [IQR 4–5]) than for Bard (3 [IQR 3–4]) and the human consents (4 [IQR 3–4]). Readability was also superior for ChatGPT-4, with a median score of 4 [IQR 4–5] compared with 4 [IQR 4–4] for Bard and 4 [IQR 3–4] for the human consents. The Gunning Fog Index for ChatGPT-4 was 17.2 [IQR 16.5–18.2], better (lower) than Bard's 23.1 [IQR 20.5–24.7] and the human consents' 20 [IQR 19.2–20.9].
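For reference, the Gunning Fog Index cited above estimates the years of schooling needed to understand a text on first reading. A minimal sketch of the standard formula follows; the syllable heuristic and sample sentences are illustrative assumptions, not the study's materials:

```python
import re

def count_syllables(word):
    # Crude heuristic: one syllable per group of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def gunning_fog(text):
    # Fog = 0.4 * (words per sentence + percentage of "complex" (3+ syllable) words)
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    complex_words = [w for w in words if count_syllables(w) >= 3]
    return 0.4 * (len(words) / sentences + 100 * len(complex_words) / len(words))

jargon = ("Postoperative alveolar osteitis may necessitate supplementary "
          "analgesic administration.")
plain = "The tooth socket may get sore. We can give you more pain relief."

print(f"jargon: {gunning_fog(jargon):.1f}")
print(f"plain:  {gunning_fog(plain):.1f}")
```

A Fog score of 17 corresponds roughly to a college-graduate reading level, which is why even ChatGPT-4's best-in-study 17.2 still sits well above typical health-literacy targets.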
Overall, ChatGPT-4’s consents received the highest quality ratings, underscoring AI’s potential in enhancing patient communication and the informed consent process. The study suggests AI can reduce misinformation risks and improve patient understanding, but continuous evaluation, oversight, and patient feedback integration are crucial to ensure the effectiveness and appropriateness of AI-generated content in clinical practice.
Comparing ChatGPT vs. Surgeon-Generated Informed Consent Documentation for Plastic Surgery Procedures
Ishan Patel, Anjali Om, Daniel Cuzzone, Gabriela Garcia Nores
Aesthetic Surgery Journal, 22 October 2024
Abstract
Background
Informed consent is a crucial requirement of a patient’s surgical care but can be a burdensome task. Artificial intelligence (AI) and machine learning language models may provide an alternative approach to writing detailed, readable consent forms in an efficient manner. No studies have assessed the accuracy and completeness of AI-generated consents for aesthetic plastic surgeries.
Objectives
This study aims to compare the length, reading level, accuracy, and completeness of informed consent forms that are AI chatbot (ChatGPT-4, OpenAI, San Francisco, CA)-generated versus plastic surgeon-generated for the most commonly performed aesthetic plastic surgeries.
Methods
This study is a cross-sectional design comparing informed consent forms created by the American Society of Plastic Surgeons (ASPS) with informed consent forms generated by ChatGPT-4 for the five most commonly performed plastic surgery procedures: liposuction, breast augmentation, abdominoplasty, breast lift, and blepharoplasty.
Results
The average word count of ChatGPT forms was lower than that of the ASPS-generated forms (1023 vs 2901, p=0.01). The average reading level of ChatGPT forms was also lower than that of ASPS forms (11.2 vs 12.5, p=0.02). There was no significant difference in accuracy or completeness scores for general descriptions of the surgery, risks, benefits, or alternatives. The mean overall impression score was 2.33 for ChatGPT consents and 2.23 for ASPS consent forms (p=0.18).
Conclusions
Our study demonstrates that informed consent forms generated by ChatGPT were significantly shorter and more readable than ASPS forms with no significant difference in completeness and accuracy.
Automated informed consent
Research article
Adam John Andreotta, Björn Lundgren
Big Data & Society, 18 October 2024
Open access
Abstract
Online privacy policies or terms and conditions ideally provide users with information about how their personal data are being used. The reality is that very few users read them: they are long, often hard to understand, and ubiquitous. The average internet user cannot realistically read and understand all aspects that apply to them and thus give informed consent to the companies who use their personal data. In this article, we provide a basic overview of a solution to the problem. We suggest that software could allow users to delegate the consent process and consent could thus be automated. The article investigates the practical feasibility of this idea. After suggesting that it is feasible, we develop some normative issues that we believe should be addressed before automated consent is implemented.
Editor’s Note: We are concerned that the core argument being made in this article challenges the integrity of the informed consent process.
Informed consent in the age of smart technologies
Jaana Leikas, Arja Halkoaho, Marinka Lanne
Finnish Journal of eHealth and eWelfare, 14 October 2024
Abstract
Technology is increasingly being brought into the home care of older people. Digitalization is seen as an enabler for efficient and resource-saving operations. In the use of technology, informed consent is considered an ethical practice and part of a responsible home care service system. The aim of this article is to describe the problem of informed consent in situations where emerging technologies, such as artificial intelligence (AI) and mass data, are used as part of welfare services and home care for older people. The article discusses principles and ways to better integrate informed consent as an ethical practice into a responsible home care service system.
A qualitative study was carried out to gather the views of experts in the field of elderly care and ethics. A content analysis of a semi-structured focus group was used to explore perceptions of the changing nature of informed consent. According to our findings, the informed consent model requires updating. The key is to embrace the idea that consent is a living process designed to respect people’s autonomous choices and protect them from risk. If the nature of the use of the data collected from individuals changes significantly in the future, the consent should also be updated to reflect this change. This aspect is important because new technologies will change the nature of the collection and use of the data. Mass data collection combines multiple databases so that the resulting data can be used even far from the original purpose or context in which it was collected. Therefore, consent should always be tailored to the context, allowing sufficient time for the person seeking and giving consent to clarify the content of the consent. This process highlights the importance of understanding the agency of the consent giver.
Using Large Language Models to Create Patient Centered Consent Forms
Beattie, S. Neufeld, D.X. Yang, C. Chukwuma, N.B. Desai, M. Dohopolski, S.B. Jiang
International Journal of Radiation Oncology, Biology, Physics, 1 October 2024
Abstract
Purpose/Objective(s)
Understanding informed consent forms, which outline the risks, costs, and procedures of clinical trials, presents a significant challenge for patients due to their complex language and length. Recognizing that such complexity can impede patient comprehension and decision-making, this study proposes the use of Large Language Models (LLMs) to distill these forms into concise, easy-to-understand cover pages. We hypothesize that we can significantly improve the readability of these forms using LLMs.
Materials/Methods
Five approved institutional clinical trial consent forms were assessed for readability using the SMOG index and the Flesch-Kincaid Grade Level, computed with Python libraries. The documents were segmented and catalogued in a vector database to facilitate similarity searches. OpenAI's GPT-4 API was prompted to extract information about costs, payments, contact information, eligibility criteria, treatments, duration, and requirements based on relevant information retrieved from the vector database. Each prompt also instructed the model to respond at a 7th- to 8th-grade reading level. The answers were collected, and their readability was evaluated using the same metrics. Finally, the original and LLM-generated SMOG and Flesch-Kincaid scores were compared using a Wilcoxon signed-rank test.
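The paired comparison in the final step of the methods can be sketched as follows; the scores below are illustrative placeholders, not the study's data:

```python
# Paired SMOG scores for five consent forms before and after LLM simplification
# (illustrative numbers only), compared with a Wilcoxon signed-rank test as in
# the Methods above.
from scipy.stats import wilcoxon

original_smog   = [16.1, 17.5, 16.4, 17.0, 16.7]  # ~graduate reading level
simplified_smog = [11.2, 11.8, 10.9, 11.4, 11.6]  # ~high-school reading level

stat, p = wilcoxon(original_smog, simplified_smog)
print(f"Wilcoxon statistic = {stat}, p = {p:.4f}")
```

Note that with only five pairs and every difference in the same direction, the exact two-sided p-value bottoms out at 0.0625; small samples leave the signed-rank test little room below conventional thresholds.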
Results
The original informed consent forms exhibited an average SMOG score of 16.74 (± 0.86), corresponding to a graduate reading level, and an average Flesch-Kincaid score of 12.93 (± 0.70), corresponding to an undergraduate reading level. In contrast, the LLM-generated information sheets exhibited an average SMOG score of 11.25 (± 0.50), corresponding to a high school reading level, and an average Flesch-Kincaid score of 8.15 (± 0.68), corresponding to an 8-9th grade reading level, demonstrating a significant reduction in reading level (p<0.05 for both SMOG and Flesch-Kincaid scores). Some LLM-generated information sheets were successful in adhering to a 7th to 8th-grade reading level, as denoted by a Flesch-Kincaid score < 8.
Conclusion
Our approach successfully simplified clinical trial consent forms using LLMs, reducing reading levels and making information more accessible. This approach could significantly enhance the transparency and comprehensibility of the clinical trial consent process, fostering a more patient-centric approach. Feedback from patients on LLM-generated information sheets could provide invaluable insights into the practicality and usefulness of this approach.