Artificial Intelligence (AI) refers to technologies that mimic human cognitive functions such as learning, reasoning and decision-making (Suleimenov et al., 2020). In education, AI is applied through tools such as intelligent tutoring systems, automated grading platforms, virtual patients and chatbots (Holmes and Tuomi, 2022). Algorithmic bias refers to systematic errors in AI outputs that result in unfair or unequal treatment of users, arising from flawed training data or biased model design. Transparency refers to how clearly an AI system communicates its internal decision-making process to end users, including both students and educators (Baker and Hawn, 2022). For example, if an AI system is trained primarily on data from English-speaking students or high-resource academic settings, it may underperform when interpreting inputs from learners with different linguistic, cultural or socioeconomic backgrounds. Such bias can affect how students are evaluated, how content is delivered and whose perspectives are represented. As AI becomes more embedded in education, the risks of such bias have become a growing concern.
Artificial Intelligence (AI) is increasingly used in nursing education through tools such as intelligent tutoring systems, virtual simulations and automated grading. While these technologies offer efficiency and personalized learning experiences, growing concerns about algorithmic bias and lack of transparency may undermine educational equity, compromise student outcomes and erode the trust of both learners and educators. When users are unable to understand how decisions are made, or perceive outcomes as unfair, their confidence in AI tools and in the institutions that adopt them can be significantly diminished (De Gagne, 2023). The global AI in healthcare education market is projected to surpass USD 5.6 billion by 2030, driven by rising demand for scalable, technology-enhanced training. These technologies promise to personalize instruction, enhance learner engagement and streamline educational assessments. Although AI holds great potential for nursing education, its ethical challenges, especially algorithmic bias and lack of transparency, are still not well understood or adequately addressed (Chang et al., 2022; Glauberman et al., 2023).
As AI becomes more embedded in education, its unintended effects demand closer scrutiny, particularly bias and lack of transparency, which often reinforce one another. Systems trained on narrow datasets may generate unequal outcomes, while opaque decision processes make it difficult to identify and correct such disparities (Rony et al., 2024). For instance, automated grading tools may undervalue writing styles or reasoning patterns that differ from dominant academic norms, while virtual patient simulations may fail to reflect the full range of cultural and clinical diversity encountered in real-world practice (Buchanan et al., 2020; Glauberman et al., 2023). Another key concern is that many AI systems are not transparent. Often referred to as "black boxes," these tools do not clearly show how they reach their decisions, making it difficult for students and educators to understand or question the outcomes. This lack of clarity weakens trust and reduces meaningful engagement with the technology (Chang et al., 2022).
Although research on AI in nursing education is expanding, the literature remains fragmented in its treatment of algorithmic bias. Most studies examine individual tools or isolated forms of bias without connecting these technical concerns to the broader ethical and pedagogical values that underpin nursing education. As a result, there is no integrated understanding of how different types of bias emerge, what structural or data-related factors contribute to them, or how they affect educational equity. Furthermore, mitigation strategies, such as inclusive data practices or explainability features, are often presented without clear guidance for implementation. This review seeks to address these gaps by synthesizing diverse strands of evidence and providing a comprehensive, ethically grounded perspective on algorithmic bias. In doing so, it aims to support more equitable, transparent and context-sensitive integration of AI into nursing education.
This review addresses the growing need for a comprehensive understanding of how algorithmic bias influences nursing education. As AI tools increasingly shape learning, assessment and clinical simulation, it is essential to ensure that these technologies reflect and reinforce the profession’s core ethical commitments, namely, equity in educational opportunities, inclusivity in design and delivery, and the cultivation of cultural competence among future practitioners. While these values are interconnected, they operate at different levels: equity and inclusivity shape the structure and accessibility of learning environments, while cultural competence is a critical outcome of those environments. When the causes and consequences of bias are poorly understood, AI systems may reinforce disparities that undermine all three. To address this gap, this scoping review synthesizes existing research on bias and transparency in AI tools used in nursing education. It aims to provide educators, developers and policymakers with a comprehensive understanding of how to design and implement equitable, trustworthy AI systems in alignment with nursing values. By bringing together insights on bias types, underlying causes and mitigation strategies, this review supports efforts to create more transparent, inclusive and ethically grounded learning environments. Based on this purpose, the objectives of the study are to:

1. Identify commonly used AI tools in nursing education and examine how they introduce or reinforce algorithmic bias.
2. Analyze key dimensions of algorithmic bias, including transparency and explainability, and their ethical and educational implications.
3. Explore existing strategies to mitigate bias and promote equitable, trustworthy use of AI in nursing education.
This review is grounded in an ethical framework that integrates core nursing values with contemporary AI ethics principles. Drawing on the American Nurses Association Code of Ethics and related guidance, the focus is on fairness, accountability, inclusivity, cultural competence and patient-centeredness as foundational commitments in nursing education. From AI ethics, the principles of transparency, non-maleficence and justice are incorporated, as articulated by bodies such as the WHO and UNESCO. AI tools are conceptualized as sociotechnical systems that mediate educational processes such as assessment, simulation and feedback, and thereby influence outcomes including equity, trust and cultural competence. Institutional governance and AI literacy are treated as key contextual factors that can amplify or mitigate the impact of algorithmic bias and opacity. Importantly, these risks are not inherent to all AI tools and can be reduced through the human interface that surrounds implementation. Educator oversight, transparent assessment policies, routine auditing for differential performance and explainability features can help identify and correct inequities before they affect student outcomes. Accordingly, this review evaluates AI in nursing education as a sociotechnical system in which design choices and implementation governance jointly shape whether AI supports or undermines equity and learning. This framework guides both the extraction of evidence and the interpretation of how AI may support or undermine the goals of nursing education.
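To make the idea of routine auditing for differential performance concrete, the sketch below illustrates one minimal form such an audit could take: comparing an AI grader's scores against instructor ratings for students grouped by cohort, and flagging groups whose average scoring error deviates markedly from the overall pattern. All names, data and the flagging threshold are hypothetical, and real audits would require validated metrics and appropriate governance; this is only an illustration of the underlying logic, not a recommended implementation.

```python
# Illustrative sketch only: a minimal audit for differential performance of a
# hypothetical automated grading tool. Records, group labels and the threshold
# are invented for demonstration; they are not drawn from any real system.
from statistics import mean

def audit_differential_performance(records, threshold=5.0):
    """Compute mean (AI score - instructor score) per group.

    records: list of dicts with keys 'group', 'ai_score', 'instructor_score'.
    A group is flagged when its mean error deviates from the overall mean
    error by more than `threshold` points (an arbitrary illustrative cutoff).
    """
    errors = [r["ai_score"] - r["instructor_score"] for r in records]
    overall = mean(errors)
    by_group = {}
    for r in records:
        by_group.setdefault(r["group"], []).append(
            r["ai_score"] - r["instructor_score"]
        )
    report = {}
    for group, errs in by_group.items():
        err = mean(errs)
        report[group] = {
            "mean_error": err,
            "flagged": abs(err - overall) > threshold,
        }
    return report

# Hypothetical example: the AI grader tracks instructor ratings closely for
# one cohort but systematically under-scores the other.
records = [
    {"group": "cohort_a", "ai_score": 82, "instructor_score": 80},
    {"group": "cohort_a", "ai_score": 78, "instructor_score": 77},
    {"group": "cohort_b", "ai_score": 70, "instructor_score": 79},
    {"group": "cohort_b", "ai_score": 68, "instructor_score": 78},
]
report = audit_differential_performance(records)
```

In this invented example, both cohorts are flagged because their mean errors (+1.5 and -9.5 points) each sit more than five points from the overall mean error, signalling the kind of differential performance that would prompt human review before scores reach students.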
In nursing education, assessment and feedback are central to the development of clinical judgment and reflective practice. When AI tools generate scores, feedback or simulated patient responses, transparency supports both students’ ability to learn from rationales and educators’ accountability for fair evaluation. These concerns align with established foundations of nursing education, including frameworks for clinical judgment development, progression in professional competence and simulation-based learning.