Students who utilized the VIPP felt the program prepared them well for virtual residency interviews, and the majority would recommend it to future classes preparing for the interview process. Given the importance of the interview in the residency application process, relevant and practical tools are needed to ensure student success. The results of this study indicate that the VIPP is a potentially useful tool for preparing fourth-year medical students for virtual residency interviews.
Interviews can generate significant anxiety for students, and practice is one way to mitigate it [12]. Students who participate in practice interviews feel better prepared for authentic interviews, are more confident, and believe the practice improves their skills [4, 13]. The ability to practice interviews in a video format is beneficial in today’s virtual interview climate [14]. Research has shown that, in preparing for virtual interviews, students need the most help with content delivery and video technical quality [15].
Mock interviews can offer feedback in these specific areas, allowing students to make adjustments before the official interview season. Addressing concerns such as video quality not only aids in interview preparation but also helps students present themselves appropriately at virtual open houses and “Zoom happy hours,” which many programs have adopted [16]. Additionally, mock interviews allow students to identify commonly asked questions and refine their answers. At our institution, mock interviews were performed inconsistently and lacked standardization. The VIPP offers a more uniform and accessible format, though its effectiveness compared to traditional mock interviews remains uncertain. Comparing virtual mock interviews to in-person mock interviews is an area for future study.
Compared to in-person mock interviews, the VIPP offers scalable, flexible, and self-directed practice with automated feedback. In addition, the VIPP allows advisors to track which questions were most troublesome for their students. Students in our study had the most difficulty answering questions about personal failure, mistakes, stressful experiences, challenging patients, and missed deadlines. Though the scoring from the VIPP is not transparent, it highlights specific coaching points for students preparing for interviews. The opportunity to practice and receive feedback on their responses to these questions may help students prepare for real interviews.
It is known that students often seek out information outside of school-sanctioned venues. Survey respondents indicated that the resources they used in addition to the VIPP included YouTube, Reddit, and Google. These resources lack transparency about what questions students are viewing and do not provide active feedback on how students perform.
The use of VIPP as a tool allows students to evaluate their performance with the virtual format and provides medical schools with the opportunity to evaluate areas for improvement. While the intent of the VIPP is to alleviate student anxiety regarding virtual interview preparation, we found it interesting that one respondent to our survey felt more stress when using the program. This may be worthy of further research as the program is integrated into medical schools and advising plans.
Integration of rapidly developing systems, such as AI, requires thoughtful deployment. As technology continues to evolve rapidly, it is important to be mindful of its adoption into medical education. One consideration emphasized by student respondents was the desire for more personalized feedback. Similarly, a study of pharmacy students identified limited authenticity as a drawback of virtual interview practice [13]. New technologies invariably require assistance and guidance on their use. The program may ultimately be most beneficial as an adjunct to personalized feedback from lead advisors or specialty mentors rather than as a replacement for them.
Strengths of the study include its large sample size of students who utilized the program. IUSM is the largest allopathic medical school in the United States, and our results therefore may be generalizable to a larger population [17]. Furthermore, the program was provided to all students free of cost, minimizing inequities between students. The web-based format of the program allows students from across the state to participate and practice irrespective of their geographic location or clerkship assignment, further expanding the generalizability of the results. The use of both quantitative and qualitative data adds to the strength of the study and future implications.
Our study is not without limitations. Though access was provided to the entire school, not all students took advantage of the resource. As a result, the findings may disproportionately reflect the views of students who were more motivated or interested in responding. One of the primary concerns regarding new technology in medical education is the potential to exacerbate inequities; to minimize this in our study, we provided the resource to all students free of charge. Additionally, while survey responses were overwhelmingly positive, our low survey response rate (14%, N = 29) introduces selection bias and may limit the generalizability of the findings.
In our methodology, ChatGPT was used to summarize themes in the data. ChatGPT can function as a valuable tool during analysis, enhancing the efficiency of thematic analysis and offering additional insights into qualitative data. The benefits of thematic summary are convenience and speed; its limitations include potential loss of nuance, risk of bias, and lower interpretive depth [18].
Responses were also voluntary. This is a single-site study, which inherently limits generalizability to other institutions. In addition, the VIPP scoring rubric is not transparent, which limits the interpretation of the performance data: while students felt the AI-generated scores were helpful, there was a lack of clarity in how the scores were derived. It may therefore be best to use the VIPP alongside feedback from other mentors and advisors, and students are still encouraged to use the token for formative feedback on their virtual interview performance. Lastly, we lack real-world outcome data, as our study did not compare individual performance in actual residency interviews with VIPP practice. It is unknown whether increased usage of the program correlates with improved interview performance or match ranking.