A five-tier framework for guiding responsible AI use in nursing students’ coursework: A faculty guide

In recent years, artificial intelligence (AI) has emerged as a defining force in higher education, offering both opportunities and challenges for student learning. Research indicates that AI technologies are increasingly integrated into students’ academic activities across disciplines. For example, a 2024 survey by the Digital Education Council found that 85% of students regularly use generative AI in their studies, with ChatGPT being the most widely utilized platform (Digital Education Council, 2025). Similarly, a 2025 global survey spanning 109 countries and comprising 20,000 responses revealed widespread use of generative AI tools among students, with more than 70% engaging with ChatGPT (Amoah et al., 2025). In my courses at the institution where I previously served as an adjunct faculty member, I personally observed how these AI technologies are reshaping student workflows. Yet, despite the rapid adoption of AI among students, many universities have yet to establish clear, workable guidelines for integrating AI into teaching and learning in ways that preserve the rigor of practice-focused education while acknowledging students’ inevitable access to these tools outside the classroom (Wang et al., 2024). This gap leaves many academicians, including nurse faculty, uncertain about how to balance ethical use, academic integrity, and meaningful student learning outcomes.

This trend is not transient but an enduring feature of academia that must be acknowledged, confronted, and taken seriously. Rather than rejecting AI outright, however, nurse faculty should adopt thoughtful and ethical strategies for integrating it into coursework without compromising academic rigor or integrity. To address this challenge, I developed a simple, transparent, and developmentally oriented guideline that allows staged engagement with AI while safeguarding core nursing values such as caring, critical thinking, clinical judgment, and professional integrity. Using this guideline, nurse faculty serve as active stewards of student learning by setting appropriate assignment expectations, facilitating ethical reflection on technology, and assessing whether AI-assisted work meets established learning outcomes. Evidence from recent reviews suggests that AI can enhance educational experiences for both nursing and non-nursing students when thoughtfully implemented (Labrague et al., 2025; Singh et al., 2025); however, continuous evaluation and faculty oversight remain indispensable.

While recent research emphasizes AI’s potential to support learning and enhance engagement in nursing education (Ma et al., 2025), some evidence also raises critical concerns (Abou Hashish et al., 2025). Some researchers argue that overreliance on generative AI may undermine critical thinking, reflective judgment, and professional identity formation in students (De Gagne et al., 2024). Additionally, debates persist regarding academic integrity, with several reports documenting increased cases of plagiarism and undisclosed AI use in student coursework (Abou Hashish et al., 2025). Faculty perspectives also vary: some view AI as a necessary pedagogical tool that can reduce barriers faced by second-language learners and support knowledge acquisition (Fountoulakis, 2024), while others fear that poorly regulated use may widen inequalities, privilege students with greater technological literacy, and erode essential skills such as communication and ethical reasoning (Sarıkahya et al., 2025).

In recent years, AI studies have tended to focus on the potential benefits of AI for learning efficiency, student engagement, or administrative support (Labrague et al., 2025; Singh et al., 2025); however, few provide structured, discipline-specific guidelines to help faculty balance academic integrity with skill development in practice-oriented fields. In nursing education specifically, most discussions have been conceptual, highlighting opportunities for AI in clinical simulation or decision-support systems but offering little guidance on how students should responsibly integrate AI into routine coursework. Moreover, existing policies are often either overly restrictive, prohibiting AI altogether, or too vague to provide meaningful direction, leaving both students and faculty uncertain.

The five-tier guideline for integrating AI in nursing education progresses from completely unaided student work to sophisticated, critically supervised co-creation with AI tools. In each category, students’ permissions and responsibilities are succinctly described, along with the specific role of nurse faculty in ensuring that learning objectives are preserved and professional values are upheld. As presented in Table 1, the framework is intentionally pragmatic, enabling faculty to map categories to course progression, aligning simpler permissions with early coursework and more advanced co-creative tasks with later coursework.

In Category A, AI use is strictly prohibited. Nurse faculty designate specific assignments, such as reflective writings, in-class clinical reasoning exercises, or formative Objective Structured Clinical Examination (OSCE)-style reflections, where students are expected to demonstrate unassisted reasoning and authentic engagement. Formative OSCE-style reflections are low-stakes, practice-oriented exercises modeled on traditional OSCEs, where students engage in simulated clinical scenarios to apply knowledge, demonstrate reasoning, and reflect on their decision-making without AI assistance (Kelly et al., 2016). For instance, nursing students may be asked to submit a reflection on a patient encounter or clinical simulation and then discuss it in a small group. At this stage, students are reminded that AI-generated reflections cannot substitute for their own cognitive and emotional processing of the scenario. By reserving authentic reflective work for Category A, faculty preserve opportunities for identity formation as caring, accountable professionals. In this category, nurse faculty clearly explain why unassisted work is pedagogically necessary, design assessments that elicit original thinking, and create rubrics that reward depth of reflection and clinical insight rather than polished prose. Nurse faculty should also model expectations by showing how independent reasoning in classroom tasks translates to readiness for real-world nursing practice.

Category B permits the use of AI for “exploratory purposes” only, often in coursework such as drafting care plans, developing patient teaching guides, constructing concept maps, and brainstorming for group presentations. In this category, student nurses may query AI to generate examples, propose explanations, or clarify difficult concepts; however, they must transform those outputs into original work. A practical example is allowing students to ask AI to list nursing interventions for congestive heart failure; they are then expected to construct a patient-centered care plan that reflects individualized assessment data and to verify AI-generated information against the scholarly literature or other credible sources. Nurse faculty responsibilities include specifying permissible query types, requiring transparent disclosure of AI involvement, and coaching students on how to validate AI outputs for accuracy and relevance. Nurse faculty should provide clear examples of acceptable and unacceptable use. For instance, students may use AI to generate a list of possible differential diagnoses to stimulate clinical reasoning but may not paste AI-generated text directly into a submitted care plan. Submissions should also include a brief disclosure note, such as: "AI was used to generate example explanations and clarify concepts. I verified the information against course materials and rephrased all AI suggestions in my own words." Nurse faculty evaluate whether the student’s final product demonstrates synthesis beyond AI prompts and whether the disclosure statement adequately describes how AI informed the student’s initial thinking.

In Category C, AI is explicitly authorized as a “writing and editing assistant” focused on surface-level mechanics such as grammar, syntax, flow, and clarity. This category applies to coursework including research papers, ethics essays, policy briefs, and discussion board posts. Nurse faculty must clearly define the boundary between language support and substantive intellectual work, ensuring that students use AI to refine expression without outsourcing argumentation. Faculty responsibilities include offering exemplars of acceptable AI-assisted edits, integrating short workshops on academic writing (with attention to second-language learners’ needs), and requiring students to annotate changes suggested by AI when substantial editing occurs. For instance, a student whose draft contains unclear transitions may use AI to generate alternative phrasings, then select, adapt, and justify chosen edits in a brief reflection submitted with the assignment. Students should also include a short submission note documenting their AI use, for example: “AI was used to improve readability and suggest alternative sentence transitions. I reviewed, adapted, and selected the edits that best supported my argument.” In this way, faculty can help reduce language barriers while safeguarding student authorship and critical thought.

In Category D, AI functions as a “research partner” that can summarize literature, suggest search terms, or outline theoretical frameworks, supporting coursework such as literature reviews, annotated bibliographies, evidence-based practice projects, and capstone proposals. At this stage, nurse faculty must emphasize information literacy: students should corroborate AI-generated summaries with primary sources from credible databases, and assignments should require both citation verification and critique of AI-generated syntheses. Faculty responsibilities include teaching verification methods, designing tasks that mandate retrieval from CINAHL, PubMed, or Scopus, and evaluating how students assess AI-generated outputs for currency, relevance, and potential bias. For example, in a nursing research course, students might ask AI for an overview of recent pressure injury prevention strategies, then independently locate and critique three peer-reviewed studies to confirm, challenge, or expand upon the AI’s summary. To promote transparency, students should also include a brief submission note disclosing AI use, such as: “AI (ChatGPT) was used to generate an initial list of search terms and a preliminary literature summary. All sources were independently verified and critiqued using PubMed and/or CINAHL.” Through such scaffolding, AI becomes a stimulus for deeper scholarly inquiry rather than a shortcut to secondary summaries.

Category E represents fully integrated, critically supervised co-creation: students may use AI across brainstorming, drafting, and analytic phases, but they must demonstrate rigorous oversight, correction, and extension of AI contributions. Examples of coursework include final projects, comparative analyses, case-based essays, group presentations, and integrative portfolios. Faculty oversight at this stage is multifaceted: faculty must require a succinct “AI use statement” describing the tools used, the prompts submitted, the verification strategies employed, and how the student’s independent reasoning modified or corrected AI outputs. Assessments should privilege original synthesis and clinical insight; for instance, students may be required to submit both an AI-assisted draft and an annotated critique that identifies errors, biases, or omissions uncovered during verification. In a leadership course comparing nurse staffing models, for example, a student might use AI to draft a comparative framework and then layer on data from course readings, clinical experience, and policy documents. Nurse faculty evaluation then focuses on the student’s ability to cross-examine the AI-generated framework and present nuanced, evidence-based conclusions. At this stage, institutional expectations and ethical principles, including human oversight and transparency, converge, requiring faculty to adjudicate complex questions of authorship and accountability. Professional and ethical frameworks from international and national bodies further reinforce the importance of maintaining transparency and human judgment in AI deployment (Galiana et al., 2024).
