key: cord-0739760-jgdmqpdk authors: Kulasegaram, Kulamakan; Baxan, Victorina; Giannone, Elicia; Latter, David; Hanson, Mark D. title: Adapting the Admissions Interview During COVID-19: A Comparison of In-Person and Video-Based Interview Validity Evidence date: 2022-01-26 journal: Acad Med DOI: 10.1097/acm.0000000000004331 sha: 551a327b75321b5ddb77cf0297fdf6331fb5ce5a doc_id: 739760 cord_uid: jgdmqpdk COVID-19 physical distancing limited many medical schools’ abilities to conduct in-person interviews for the 2020 admissions cycle. The University of Toronto (U of T) Temerty Faculty of Medicine was already in the midst of its interview process, with two-thirds of applicants having completed the in-person modified personal interview (MPI). As the university and surrounding region were shut down, the shift was made in the middle of the application cycle to a semisynchronous video-based MPI interview (vMPI) approach. U of T undertook the development, deployment, and evaluation of the 2 approaches mid-admissions cycle. Existing resources and tools were used to create a tailored interview process with the assistance of applicants. The vMPI was similar in content and process to the MPI: a 4-station interview with each station mapped to attributes relevant to medical school success. Instead of live interviews, applicants recorded 5-minute responses to questions for each station using their own hardware. These responses were later assessed by raters asynchronously. Out of 627 applicants, 232 applicants completed the vMPI. Validity evidence was generated for the vMPI and compared with the MPI on the internal structure, relationship to other variables, and consequential validity, including applicant and interviewer acceptability. Overall, the vMPI demonstrated similar reliability and factor structure to the MPI. As with the MPI, applicant performance was predicted by nonacademic screening tools but not academic measures. Applicants’ acceptance of the vMPI was positive. 
Most interviewers found the vMPI to be acceptable and reported confidence in their ratings. Continuing physical distancing concerns will require multiple options for admissions committees to select medical students. The vMPI is an example of a customized approach that schools can implement and may have advantages for selection beyond the COVID-19 pandemic. Future evaluation will examine additional validity evidence for the tool.
Applicants' acceptance of the vMPI was positive. Most interviewers found the vMPI to be acceptable and reported confidence in their ratings. Continuing physical distancing concerns will require multiple options for admissions committees to select medical students. The vMPI is an example of a customized approach that schools can implement and may have advantages for selection beyond the COVID-19 pandemic. Future evaluation will examine additional validity evidence for the tool. Live, face-to-face interviews of applicants remain an important part of the undergraduate MD admissions pathway. When conducted using evidence-informed principles and best practices, interviews can effectively assess the personal characteristics of applicants. 1,2 At the same time, interviews also offer an opportunity to recruit highly sought out applicants as well as to serve the social accountability mandates of schools that seek to broaden representation from minority populations. 3 Above all, interviews are an opportunity to provide a human touch to the admissions process and engage with applicants, albeit under stressful circumstances. 4 However, the COVID-19 pandemic that occurred in the midst of many medical schools' application cycles has had a dramatic impact on the application process, including the viability of live, face-to-face interviews. The University of Toronto (U of T) was in the midst of the interview stage of admissions when a provincial state of emergency and shutdown was declared in mid-March 2020. In typical years, U of T uses the modified personal interview (MPI), a compromise approach between a traditional holistic personal interview and the multiple mini-interview. 5, 6 The MPI consists of 4 independent semistructured consecutive interviews, each lasting 12 minutes and focusing on the personal experiences, important admissions attributes, and reflections of applicants. Each interview is led by a single interviewer, allowing for oneon-one interaction. 
There are short pauses between interviews, and the total time of the interview cycle is approximately an hour. The interviewers include current U of T medical faculty, postgraduate resident trainees, fourth-year medical students, other health care professionals, and community members. The MPI balances the recruitment and personalized aspects of interviewing with best practices of measurement and resourcing. Validity evidence for the MPI has shown acceptable reliability, prediction of future performance, acceptability by candidates, and resource efficiency. 5, 6 Typically, U of T would interview over 600 candidates over 6 weekend days after a file review period. Due to the COVID-19 shutdown (March 17, 2020), only 4 of those in-person interview days had been completed, with the remaining candidates invited for interview but unable to physically attend. In response, U of T opted to quickly innovate and develop an online semisynchronous interview format (i.e., interviews are conducted synchronously, whereas responses are not evaluated in real time) for the balance of invited interviewees. Electronic interviewing has been increasingly adopted by many organizations, though the implications of this approach are only now being discussed in regard to medical school admissions. 3 Some medical education organizations have already adopted video-based approaches such as the Association of American Medical Colleges (AAMC) standardized video interview for residency. 7-10 Below, we describe our path of innovation and evaluation of an in-house, semisynchronous, video-based replacement for the MPI known as the video MPI (vMPI) in the context of undergraduate medicine admissions. This approach allowed us to preserve some of the important aspects of interviews while adapting to the physical isolation and governmental travel restrictions necessary to preserve public health.
Moreover, we were able to compare the measurement properties and utility of the vMPI to the MPI within the same cycle under an authentic admissions context. Specifically, we compared the validity evidence 11 for internal structure, relationship to other variables, and consequential validity of the vMPI to that of the MPI. The COVID-19 shutdown took effect on March 17, 2020, 12 necessitating cancellation of the planned U of T MPI dates of March 28 and March 29, 2020. The medical school admissions offer date for all Ontario medical schools was May 12, 2020, which remained unchanged despite the COVID-19 shutdown. Our vMPI innovation and development window was therefore only 2 weeks. First, options for conducting the MPI as multiple virtual interviews were explored. 10, 13 We identified that a real-time synchronous interview format posed significant challenges to a multiple interview format, as technical interruptions, delays, and possible service outages would hamper effective applicant responses and evaluation. Furthermore, under COVID-19 physical distancing restrictions, organizing a large group of health professional interviewers (our MPI interviewers are often physicians, fellows, residents, medical students, and community members) for a single block of time during the pandemic would not have been feasible. Other potential tools, such as situational judgment tests and commercial options, were not viable given the short time frame and would not have aligned with publicly stated policies. An asynchronous approach to the MPI was chosen. Applicants would record their responses to each of the individual vMPI interview questions within a limited time frame. These responses would then be uploaded to a secure U of T server and evaluated at a later time. While this approach eliminated the interaction and spontaneity of live, face-to-face interviews, it provided a structured format using multiple independent sampling without imposing additional rater requirements.
We aimed to keep the vMPI as close as possible to the MPI's constructs and design. For detailed information about the MPI, please see Hanson and colleagues 5 and Hanson and colleagues. 6 Both formats consist of a 4-station track, with each station mapped to 1 of 4 key attributes (self-reflection, ethics, collaboration, and values) valued for physician training. In the MPI, each interview consisted of a 12-minute face-to-face interaction with a single interviewer, with the candidate evaluated on 4 attributes (maturity, communication, conscientiousness, and a station-specific attribute). Interviewers ask standardized questions to open each interview but also have suggested prompts and some flexibility to personalize the interview. At the end of the interview, the interviewer evaluated the candidate on 4 items covering key attributes relevant for admissions using a 7-point global rating scale, with 1 indicating a "poor" response, 4 "very good," and 7 a "superb" response. Each MPI is scored out of a total of 28, with possible scores ranging from 4 to 28. The overall structure for the vMPI was very similar, with 2 notable exceptions. First, applicants recorded their responses to 4 interviews and uploaded them for assessment by raters later. Each interview had only 1 interview question. Applicants had 30 minutes to prepare, record, and upload their response (3-4 minutes to prepare their response, time to record the 5-minute video response, 15 minutes to save and complete the upload of the recording to the U of T server, and 5 minutes to prepare for the next interview question). The next question began 30 minutes after the start of the previous question. No probing questions were possible in this asynchronous format. Second, applicants were evaluated only on communication/maturity and the station-specific item, as the other items typical of in-person MPIs were deemed difficult to assess. Moreover, high interitem correlation across MPI stations made this a defensible decision. 5
The same 7-point global rating scales and anchors were used for the vMPI, with higher scores indicating better performance. These scores were normalized to a 28-point scale for comparison with the MPI. After recording responses to each of the 4 interviews, the vMPI was complete for the applicant. Each vMPI interview was scored out of 14, and a total score was calculated and normalized for entry into the admissions ranking pool. Applicants used their own technology with recording capabilities (camera and microphone) to record their responses. We recommended that applicants record their responses on a laptop or desktop for easy transmission via the shared link. Through technology testing, we identified that it was best to avoid transferring files by email: due to the size of the files, this could have taken a long time to send or receive. We provide specific details on the process in Supplemental Digital Appendix 1, at http://links.lww.com/ACADMED/B164. A secure file-sharing platform, ShareFile (Citrix Systems, Inc., Raleigh, NC), was used to collect applicants' prerecorded video responses. This platform requires administrative users to log in using 2-factor authentication and was therefore selected for its advanced security features. All collected responses were then hosted on the MD program's private Vimeo account (Vimeo.com Inc., New York, New York) to generate individualized links for each video. The videos were neither searchable nor indexed on public search domains. All videos on the Vimeo account were permanently deleted after all of the clips had been backed up on private, secure storage and fully rated. To ensure that raters could confidentially access their assigned videos, all interview clip links were organized and posted on a dedicated learning management system. Rating forms were created for individual raters using the MD program's Qualtrics survey platform account (Qualtrics, Provo, Utah).
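The scoring arithmetic above can be made concrete; the following is a minimal sketch of one simple linear rescaling consistent with the description (the ratings shown are hypothetical, not actual applicant data):

```python
def score_vmpi_station(communication_maturity: int, station_specific: int) -> int:
    """Sum the 2 seven-point global ratings for one vMPI station (range 2-14)."""
    for rating in (communication_maturity, station_specific):
        if not 1 <= rating <= 7:
            raise ValueError("ratings use a 7-point scale (1 = poor, 7 = superb)")
    return communication_maturity + station_specific

def normalize_to_mpi_scale(station_score: int) -> float:
    """Rescale a vMPI station score out of 14 to the MPI's 28-point scale."""
    return station_score * 28 / 14

# Hypothetical example: ratings of 5 and 6 on one station
station = score_vmpi_station(5, 6)      # 11 out of 14
print(normalize_to_mpi_scale(station))  # 22.0 on the 28-point scale
```

The doubling shown here is only the most direct reading of "normalized to a 28-point scale"; the program's actual normalization procedure is not specified in the article.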
Applicants received an initial communication regarding the change from a live to a video interview format on March 13, 2020. They were informed that they would have a chance to test the technology before official interviews and that they could request a letter of support to assist them with travel cancellations. On March 17, 2020, the date of the official announcement confirming the closure of the university, applicants received a second communication providing additional details about the vMPIs, including format, number of interviews, rater information, technology testing, and projected length of the vMPI. Applicants were sent technology testing instructions on March 20, 2020, and were given 4 days to complete their technology checks. The technology testing communication provided details regarding the start and end time of the vMPI, the method by which applicants would receive the interview questions, recording and uploading time, video production recommendations, and step-by-step instructions for recording and uploading their video responses to a secure U of T site (see Supplemental Digital Appendix 1, at http://links.lww.com/ACADMED/B164). Applicants were able to test recording videos as many times as they wished but were asked to upload only 1 test video to the U of T secure system. Piloting was completed by 12 Enrolment Services admissions team members on March 19, 2020. Following technology testing, the admissions team offered 2 town hall meetings on March 25, 2020: 1 in the morning and 1 in the afternoon to accommodate time differences across applicants' locations; the sessions were recorded and shared. The official interview started at 9:00 am EDT on each of the days.
The 4 questions were sent in 4 separate and sequential emails, and applicants had to make 4 separate uploads:
• Applicants received an email with the first question;
• They were asked to record a 5-minute response to the question;
• Following the recording, they had approximately 15 minutes to save and upload their recording via the secure web link included in the email;
• The same steps were followed for questions 2, 3, and 4 with the same time frames.
Applicants were asked to set aside 4 hours for the online interview cycle to allow ample time to complete the interview. If applicants had urgent questions or experienced any difficulties during the interview process, they were encouraged to email the Enrolment Services team, where a staff member monitored inquiries during and following the interview. After completing the vMPI, applicants were sent an email confirming the end of the interview day, as well as reminders about what to do if they encountered technical difficulties and what to expect as next steps. The same pool of raters selected for the MPI was used for the vMPI. Raters were given standard interview training consisting of online modules, with additional new training materials developed and contextualized to the vMPI as well as the specific station they were asked to evaluate. In preparation for the MPI, raters attend a 30-minute orientation session on the morning of interviews. This session was modified, recorded as a narrated video presentation, and provided to raters through the online training module. Each rater scored a single vMPI station so as to preserve multiple independent sampling. Raters were able to view videos in a secure learning management system module (Quercus/Canvas; Instructure Inc., Salt Lake City, Utah) and record their ratings using an online form delivered via Qualtrics.
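The timed question-release cycle described above (a new question every 30 minutes from the 9:00 am start) can be sketched as a simple schedule; the date and times below are illustrative:

```python
from datetime import datetime, timedelta

def question_schedule(start: datetime, n_questions: int = 4,
                      interval_minutes: int = 30) -> list:
    """Release times for sequential interview questions: each question is
    emailed a fixed interval after the start of the previous one."""
    return [start + timedelta(minutes=interval_minutes * i)
            for i in range(n_questions)]

# Hypothetical interview day starting at 9:00 am local time
times = question_schedule(datetime(2020, 3, 28, 9, 0))
print([t.strftime("%H:%M") for t in times])  # ['09:00', '09:30', '10:00', '10:30']
```

Under this 30-minute cycle, the full 4-question interview occupies roughly 2 hours of recording and uploading, which is comfortably within the 4-hour window applicants were asked to set aside.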
Additionally, we asked raters to answer 2 queries on 5-point Likert scales: whether the video format interfered with their evaluation and how confident they were in their ratings of each applicant. Raters were able to review each video as well as review and change their scores as many times as they liked before submitting their final rating form. We compared the MPI and vMPI on 3 important factors related to the sources of validity evidence: internal structure (reliability and factor structure), relationship to other variables (concurrent validity), and consequences (acceptability of the vMPI and impact on the admissions process). Internal structure was analyzed using generalizability analyses of reliability as well as exploratory and confirmatory factor analysis (CFA) of the MPI and vMPI. Relationship to other variables was explored using correlations with personal characteristics and academic assessment subscores from the file review that serves the selection process leading to the final interview stage. Consequences were studied using anonymized applicant and rater satisfaction surveys with 5-point Likert scale items. We also report raters' aggregate confidence in the vMPI, which speaks to response process. Finally, we discuss how vMPI and MPI scores were integrated for making final admissions decisions and admissions offers. All statistical analyses were conducted using AMOS and SPSS statistical software, version 23 (IBM, Armonk, New York), and urGENOVA (Brennan, Iowa City, Iowa). These data collection and analysis activities were considered quality assurance at U of T; accordingly, the project was exempted from review by the research ethics board. Preinterview admissions information for both cohorts is presented in Table 1. A total of 232 applicants (37% of the entire interview pool) completed the vMPI across 2 days (March 28 and 29), for a total of 928 vMPIs. These were assessed by 80 raters, with each station evaluated by a single independent rater.
Some raters completed multiple stations. The majority of vMPI applicants (>97%) were able to record and upload their videos in a single attempt. There were no significant differences in the mean file review scores used to select for interview between the MPI and vMPI cohorts. None of the other admissions variables reached the threshold for significance. Reliability was analyzed using generalizability theory, with facets of generalization being interview type (ethics, values, etc.) and items. The facet of differentiation was applicant, nested in track, nested in day. The results are reported in Table 2 and show similar reliability for both formats. A further validity expectation was that if the vMPI assessed similar characteristics as the MPI, supporting evidence would include a similar factor structure for both interview formats. Previous exploratory factor analysis (EFA) of MPI data was used to generate a factor structure for the CFA. The EFA for previous analyses had a Kaiser-Meyer-Olkin criterion > 0.6, indicating acceptable sampling. Using maximum likelihood extraction and the eigenvalue criterion, analyses suggested a single factor was optimal. We tested this using a CFA that compared a 4-factor model (each station is its own factor) versus a single-factor model. For both the MPI and vMPI, the single-factor model had better fit, with the root mean square error of approximation being less than 0.06 for the single-factor model. The MPI and vMPI are intended to measure personal characteristics as opposed to academic measures (e.g., GPA). At U of T, applicants first undergo file screening, including an academic score consisting of their GPA (or weighted GPA, if applicable), MCAT threshold, and special circumstances.
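The generalizability analysis reported above reduces, in its simplest decision-study form, to a ratio of variance components. The following sketch computes a generalizability (Eρ²) coefficient for a person-by-station design; the variance components shown are hypothetical illustrations, not the study's actual estimates:

```python
def g_coefficient(var_person: float, var_error: float, n_stations: int) -> float:
    """Generalizability (E-rho-squared) coefficient for a person x station
    design: universe-score (person) variance divided by itself plus the
    error variance averaged over the number of stations sampled."""
    if n_stations < 1:
        raise ValueError("need at least 1 station")
    return var_person / (var_person + var_error / n_stations)

# Hypothetical variance components (not the study's estimates)
print(round(g_coefficient(0.5, 1.5, 4), 2))  # 0.57 with 4 stations
print(round(g_coefficient(0.5, 1.5, 8), 2))  # 0.73 with 8 stations
```

The second call illustrates the rationale for multiple independent sampling: holding the variance components fixed, adding stations averages down the error term and raises the coefficient. The study's actual design (with items nested in stations, and applicants nested in track and day) has more facets than this two-component sketch.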
Next, a detailed file review is conducted before the interview, and a rubric-based nonacademic score is generated by a rater-based assessment of applicants' personal statements (brief personal essays), list of extracurricular activities (autobiographical sketch and statements), and references (confidential assessment forms). As expected, both the MPI total score and vMPI total score were positively and significantly correlated with the nonacademic subscore, with the MPI at r = 0.22 and the vMPI at r = 0.20. Correction for restriction of range increased these correlations to 0.29 for the MPI and 0.27 for the vMPI. Neither tool correlated significantly with GPA or academic performance; coefficients were essentially 0. Eighty-six applicants completed a post-vMPI satisfaction survey (37% response rate). Overwhelmingly, they expressed high levels of satisfaction with the process (see Figure 1). Only 1 question was similar across the surveys sent to the vMPI and MPI groups: "The interview assessed skills and attitudes relevant for selecting medical students." While 89% (n = 76) of vMPI applicants responded somewhat agree, agree, or strongly agree, over 97% (n = 194) of MPI applicants responded similarly. Raters in the vMPI pool expressed high levels of satisfaction with the process. For 86% of the videos (n = 798/928), raters (n = 80) said that the video and technical quality had no impact on their ability to rate, and they reported only minimal impact for 12% of the videos (n = 111). Most importantly, for over 94% (n = 872) of the videos, raters stated they were definitely or fairly confident in their ability to assess the applicants. Given the use of 2 different interviewing approaches in a single admissions cycle, the admissions committee evaluated the psychometric properties of both approaches and reviewed the accumulated validity evidence. Because the MPI typically makes up 50% of an applicant's final score used for ranking, the vMPI was weighted similarly.
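Restriction-of-range corrections of the kind reported above are commonly computed with Thorndike's Case 2 formula. A sketch of that calculation follows; the unrestricted-to-restricted SD ratio used below is an assumed illustrative value chosen to reproduce the reported magnitude, not a figure taken from the study:

```python
import math

def correct_restriction_of_range(r: float, sd_ratio: float) -> float:
    """Thorndike Case 2 correction. r is the correlation observed in the
    restricted (selected) sample; sd_ratio is SD(unrestricted applicant
    pool) / SD(restricted sample) on the selection variable."""
    return (r * sd_ratio) / math.sqrt(1 - r**2 + (r**2) * sd_ratio**2)

# With an assumed SD ratio of 1.35, an observed r of 0.22 corrects upward:
print(round(correct_restriction_of_range(0.22, 1.35), 2))  # 0.29
```

Because interviewees were preselected on the file review score, the observed correlations understate the relationship in the full applicant pool; the correction estimates what the correlation would be without that selection.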
The final offer list of applicants was created to proportionally represent the cohort of applicants interviewed by the vMPI (i.e., ~37% of offers were made to vMPI applicants). However, this final list had a large overlap (>98%) with the hypothetical list assembled by pooling both cohorts and ignoring the type of interview. In the midst of the COVID-19 pandemic, we demonstrated that a bespoke, in-house, asynchronous video-based interview had adequate validity evidence and was comparable to the live face-to-face interview. Our findings add to the growing literature on virtual interviewing solutions. 7-10 These solutions are still in their infancy for undergraduate admissions but are possible by leveraging the resources brought forward by applicants and the institution. The strength of the vMPI lies in its use of best practices and evidence in admissions. In retrospect, our approach aligned with the AAMC's guidelines and recommendations for best practices 14 and approaches piloted elsewhere. 7-9 The validity evidence for the tool emerges from the use of multiple independent sampling, adequate rater training, and building on an existing admissions framework. The vMPI leveraged existing technology resources with reasonable requirements on the part of applicants. While socioeconomic concerns are an issue, the vast majority of U of T vMPI applicants successfully completed the interview process. Going forward, virtual interviews not only maintain physical distance but may be a less expensive option given the potential travel and accommodation costs for some applicants. 3 A great part of the success of the vMPI was due to the communications strategy and support offered by the school. While this process can be labor intensive, it may be a solution for schools that cannot afford a commercial selection solution. With the possible continuance of social distancing and travel restrictions, options such as the vMPI can be implemented.
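The offer-list comparison described earlier, between the proportionally constructed list and the hypothetical pooled list, amounts to simple set arithmetic. A sketch with hypothetical applicant IDs (the list sizes and names are illustrative only):

```python
def offer_list_overlap(proportional: list, pooled: list) -> float:
    """Fraction of offers shared between 2 offer lists of equal length."""
    if len(proportional) != len(pooled):
        raise ValueError("compare equally sized offer lists")
    return len(set(proportional) & set(pooled)) / len(pooled)

# Hypothetical: two 5-person offer lists that differ in a single name
proportional_list = ["a01", "a02", "a03", "a04", "v01"]
pooled_list = ["a01", "a02", "a03", "a04", "a05"]
print(offer_list_overlap(proportional_list, pooled_list))  # 0.8
```

In the study, this overlap exceeded 0.98, suggesting that enforcing proportional representation of the vMPI cohort barely changed who received offers relative to a format-blind pooled ranking.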
The results of our local implementation and evaluation provide a framework for other programs to adapt their own processes as well as an approach to evaluate their implementation. Specifically, we encourage all schools to begin with a validity perspective on their admissions tools. Validity as a starting point for evaluating admissions processes provides a road map for fair and defensible decision making. Moreover, as we have demonstrated, a validity perspective creates room for the integration of institutional values, stakeholder acceptability, and feasibility considerations. While many schools do not use an MPI, similar multiple independent sampling formats are widely used in North America, 2 including multiple mini-interviews and other shorter variants 15 akin to the MPI. Moreover, many schools and smaller residency programs were already considering the transition to online interviews. 16, 17 Our specific implementation and evaluation steps can be employed in both large and small programs with limited interviewer resources. The steps we outline highlight how schools can engage applicants, ensure quality assurance of changes, and identify the impact of the transition to virtual interviews. Additionally, our findings strongly suggest that an asynchronous approach can have future utility for selection decisions. Virtual interviews offer many advantages, including lowering applicant travel costs. As North American schools enter future rounds of admissions, the greater the number of options available to admissions committees, the more likely the admissions process can meet the needs of all stakeholders. Beyond the COVID-19 pandemic, in-person interviews may remain an important part of admissions interviewing as other factors such as recruitment, exposure to facilities/campus, and face-to-face interaction remain valuable within admissions.
The main advantage of video-based interviewing may be in enhancing existing admissions processes, such as screening for interviews. Measures of the so-called noncognitive factors used in file review or preinterview screening have, for the most part, poor validity evidence. While situational judgment tests have shown promise, 18 virtual interviews may also play a role in screening as they offer a proxy for face-to-face interaction and multiple sampling of applicant performance. More tailored tools such as the vMPI may enable schools to better reflect their missions and values. Additionally, virtual interviews may expand geographic and sociodemographic access for schools. The cost of travel to and from interviews can be a serious expense for disadvantaged applicants. The vMPI may provide an interview alternative for U of T and other medical schools active in recruiting international applicants by eliminating the high cost of international travel for this applicant subgroup. Still, as we have shown, applicants are less ready than interviewers to accept virtual formats as the sole form of interviewing. Although our vMPI cohort was positive, they were less so than the cohort that experienced the MPI. Greater transparency and communication of the advantages of the virtual format, and ensuring that the interactive and human element of the admissions process is retained, will be an ongoing challenge. 3 Limited interactivity may also curtail the ability of interviews to assess important attributes, including interpersonal interactions. Better understanding of what asynchronous interviews can assess, and what they cannot, is still an ongoing area of investigation for medical school admissions. The main limitation of our evaluation is the lack of evidence showing predictive validity. Previous studies in medical education have shown that interviews can and do predict future performance. 1,2,5
Future research must examine this gap in the validity evidence, preferably against the same outcomes the MPI and other interviews have been validated against. The vMPI will continue to be used at U of T for the duration of physical distancing and the COVID-19 pandemic. The vMPI was an innovative stopgap solution to the admissions problem confronting U of T. While it shows preliminary evidence of validity and comparability to the MPI, future evaluations will examine predictive validity and other issues such as sustainability. Innovations such as the vMPI need to be shared and studied as the medical school admissions community responds to both the COVID-19 pandemic and the changing technology landscape of admissions.

References
Association between a medical school admission process using the multiple mini-interview and national licensing examination scores.
Do multiple mini-interview and traditional interview scores differ in their associations with acceptance offers within and across five California medical schools.
A reflection upon the impact of early 21st-century technological innovations on medical school admissions.
Assessing personal qualities in medical school admissions.
Multiple independent sampling within medical school admission interviewing: An "intermediate approach."
Modified personal interviews: Resurrecting reliable personal interviews for admissions.
Innovation in residency selection: The AAMC Standardized Video Interview.
The AAMC Standardized Video Interview and the electronic Standardized Letter of Evaluation in emergency medicine: A comparison of performance characteristics.
The AAMC Standardized Video Interview: Reactions and use by residency programs during the 2018 application cycle.
Internet-based multiple mini-interviews for candidate selection for graduate entry programmes.
National Council on Measurement in Education. Standards for Educational and Psychological Testing.
COVID-19: Designing and conducting an online mini-multiple interview (MMI) in a dynamic landscape.
Conducting interviews during the coronavirus pandemic.
Reliability and acceptability of a five-station multiple mini-interview model for residency program recruitment.
Randomized evaluation of a web-based interview process for urology resident selection.
Residency interview video conferencing.
An online pre-interview screen for personal/professional characteristics: Prediction of national licensure scores.