key: cord-0982352-vpsjjtu2
authors: O'Connor, Enda; Doyle, Evin
title: A Scoping Review of Assessment Methods Following Undergraduate Clinical Placements in Anesthesia and Intensive Care Medicine
date: 2022-04-05
journal: Front Med (Lausanne)
DOI: 10.3389/fmed.2022.871515
sha: 041969fa3c9af43203cbc7e3c087265dcc95489e
doc_id: 982352
cord_uid: vpsjjtu2

ABSTRACT

INTRODUCTION: Anesthesia and intensive care medicine are relatively new undergraduate medical placements. Both present unique learning opportunities and educational challenges to trainers and medical students. In the context of ongoing advances in medical education assessment and the importance of robust assessment methods, our scoping review sought to describe current research around medical student assessment after anesthesia and intensive care placements.

METHODS: Following Levac's six-step scoping review guide, we searched PubMed, EMBASE, EBSCO, SCOPUS, and Web of Science from 1980 to August 2021, including English-language original articles describing assessment after undergraduate medical placements in anesthesia and intensive care medicine. Results were reported in accordance with PRISMA scoping review guidelines.

RESULTS: Nineteen articles published between 1983 and 2021 were selected for detailed review, with a mean of 119 participants and a median placement duration of 4 weeks. The most common assessment tools were multiple-choice questions (7 studies), written assessment (6 studies) and simulation (6 studies). Seven studies used more than one assessment tool. All pre-/post-test studies showed an improvement in learning outcomes following clinical placements. No studies used workplace-based assessments or entrustable professional activities. One study included an account of theoretical considerations in study design.

DISCUSSION: A diverse range of evidence-based assessment tools has been used in undergraduate medical assessment after anesthesia and intensive care placements. There is little evidence that recent developments in workplace assessment, entrustable activities and programmatic assessment have translated to undergraduate anesthesia or intensive care practice. This represents an area for further research as well as for curricular and assessment development.

INTRODUCTION

The inclusion of anesthesia and intensive care medicine (ICM) in undergraduate medical student placements is a relatively new development (1). Recent publications have sought to define suitable curricula in these disciplines (2, 3). With expanding placement opportunities comes an ever-increasing obligation to ensure that student learning is effective and efficient, that student time is "well-spent," and that we "maximize assessment for learning while at the same time arriving at robust decisions about learner's progress" (4).

The intensive care unit (ICU) and the anesthetic room can be challenging areas for student learning. Opportunities for history-taking and clinical examination are variable (1, 5). Patients undergoing anesthesia require a focused history and examination tailored to the upcoming anesthetic and surgical procedure (5). ICM patients are commonly sedated and/or confused, impeding history-taking. Clinical examination in the ICU is more challenging in the context of an immobile, unresponsive patient on extracorporeal devices (dialysis, mechanical ventilation). Furthermore, in both disciplines, procedural learning is often limited by the complex, high-stakes, time-sensitive nature of common tasks (1, 5).
Conversely, anesthesia and ICM offer learning opportunities not readily available during other placements. They are ideal environments for the vertical integration of primary and clinical sciences (6). Many learning topics are unique to these disciplines (e.g., acute respiratory distress syndrome, clinical brainstem death evaluation, inhalation anesthesia, pharmacological neuromuscular blockade). Other key elements of their curricula (e.g., the management of acute respiratory failure, shock, acute airway emergencies, sedation administration) are generic, high-stakes, transferable clinical skills that could be viewed as important competencies for all doctors.

Current evidence suggests that ICM and anesthesia placements can achieve effective student learning outcomes (7). Nonetheless, the unique nature of their curricula may require a bespoke approach to learner assessment. Furthermore, valid and reliable tools are central to assessment decisions regarding high-stakes competencies such as effective acute and perioperative patient care. Despite this, three recent papers on curriculum and effective teaching in the ICU and anesthetic room make no recommendations about student assessment (2, 3, 5). The first objective of our review, therefore, was to evaluate the nature and robustness of published assessment strategies in these high-stakes clinical specialties, incorporating an analysis of the theoretical bases for these publications.

The expansion of undergraduate anesthesia and ICM placements has occurred contemporaneously with an evolution in medical education assessment. Practice is moving away from evaluating low-level cognitive learning objectives such as knowledge and understanding (using MCQs and written examinations) toward knowledge application (using extended matching questions and OSCEs) and, most recently, toward clinical performance in either a simulated or a workplace environment (8-11). Furthermore, longitudinal methods such as programmatic assessment have gained in popularity in recent years (12). The extent to which this evolution has translated to assessment in undergraduate anesthesia and ICM education was the second objective of our review.

METHODS

A scoping review methodology was used for two reasons. First, the authors had prior knowledge of the research topic and recognized that the range of published literature was unlikely to yield research of sufficient quality to enable a systematic review or a meta-analysis. Second, in light of recent advances in assessment practice, we anticipated a knowledge and/or research gap in the areas of anesthesia and ICM assessment. Our study methodology therefore needed to be tailored to identifying these gaps were they to exist (13, 14).

We used the six-step adaptation of Arksey and O'Malley's (15) scoping review framework as proposed by Levac et al. (13). These steps are (1) identifying research questions, (2) identifying relevant articles, (3) study selection, (4) charting the data, (5) collating, summarizing, and reporting the results, and (6) consulting with stakeholders. In addition, we applied a scoping review quality checklist to enhance the rigor of our findings (16). The overarching purposes of our scoping review were (a) to describe the nature of existing research about undergraduate assessment after anesthesia/ICM placements and (b) to identify research gaps in this area. The following six steps were applied.

The scoping research questions were:
1. What methods and practices of assessment have been reported in the literature for students undertaking clinical placements in anesthesia and/or intensive care medicine?
2. What educational theories have been articulated for the assessment methods published in the literature?

Using five online databases (EMBASE, SCOPUS, EBSCO, PubMed, Web of Science), we searched for all available papers from January 1980 to 31/12/2020, using the search term "medical student" together with several variations on "anesthesia" and "intensive care" to account for differences in regional terminology (see Figure 1). A librarian assisted with accessing articles. Reference lists of relevant articles were also included in the search. The high intensive care and anesthesia workload during the pandemic in early 2021 led to a reallocation of research time to clinical duties, delaying completion of the scoping review. Accordingly, a further search was performed up to 31/08/2021.

To be included, studies had to describe original research using an assessment tool following an undergraduate placement in anesthesia and/or intensive care medicine. Studies were excluded if they were not published in English, if they enrolled postgraduate or non-medical learners, or if the assessment followed a standalone course rather than a clinical placement.

The three stages of study selection were based on title, abstract, and full-text screening respectively, managed with Mendeley reference-management software. The two authors independently screened the publications for study inclusion, and a Cohen's kappa coefficient was calculated to quantify author agreement at each stage of selection. The authors met regularly to discuss and resolve any disagreements about study inclusion. The number of studies included in each stage of selection, and the exclusion criteria, are shown in the flowchart in Figure 1.

Evaluating each study involved a combination of numerical description and general thematic analysis. For the former, the following information was extracted from each article: lead author; country of authorship; journal title; year of publication; study design; number of research sites; sample size; assessment tools used; quantitative outcomes; Miller's learning outcomes (17); and MERSQI score (18). Through thematic analysis, other details about the studies were recorded, including qualitative outcomes, salient quotes from the authors, theoretical considerations and any insights pertinent to the research area. In accordance with the Levac et al. (13) scoping review methodology, the two authors met after data extraction from the first six articles to determine whether the approach was "consistent with the research question and purpose" (p. 4). Study authors were contacted directly if further information or clarification about their findings was deemed necessary. The information drawn from each article was summarized and tabulated (see Table 1).

Consultation was undertaken via email with three stakeholders involved in undergraduate education and/or anesthesiology/intensive care medicine teaching, each working in a different academic institution. Preliminary study results were shared with them. The purpose of the consultation was to seek opinions about any omitted sources of study information, to gain additional perspectives on the study topics, and to invite opinions about the study findings.
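For readers unfamiliar with the statistic, Cohen's kappa corrects raw inter-rater agreement for the agreement expected by chance. The review does not publish its kappa values or code, so the Python sketch below simply illustrates the calculation on hypothetical screening decisions; the function, variable names, and data are illustrative assumptions, not the authors' material.

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    # kappa = (p_o - p_e) / (1 - p_e): observed agreement p_o,
    # corrected for the chance agreement p_e implied by each
    # rater's marginal label frequencies.
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[label] * freq_b[label] for label in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical include/exclude decisions (1 = include) for ten abstracts.
reviewer_1 = [1, 0, 0, 1, 1, 0, 0, 0, 1, 0]
reviewer_2 = [1, 0, 1, 1, 0, 0, 0, 0, 1, 0]
print(f"kappa = {cohen_kappa(reviewer_1, reviewer_2):.2f}")  # kappa = 0.58
```

By convention, values of roughly 0.61-0.80 are often read as substantial agreement; lower values, as in this hypothetical example, would prompt the discussion-based resolution of disagreements that the authors describe.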
RESULTS

A total of 2,435 results were returned from the initial search across the five databases, of which 17, published between 1983 and 2020, were selected for full-text review (6, 20-34, 36). The second search, performed in August 2021, returned two further studies published in 2021 (19, 35). The findings of these 19 studies are shown in Table 1.

Of the 19 studies, 9 (47.4%) involved anesthesia (21, 23-25, 28-32), 8 (42.1%) intensive care medicine (6, 19, 20, 26, 27, 33-35) and 2 (10.5%) a combination of both disciplines (22, 36). The primary research focus was on the assessment instrument in 9 studies (22-26, 28-30, 36) and on student learning in 7 (19-21, 27, 32, 33, 35); the remaining 3 studies had an equal research focus on learning and the assessment tool. All were single-center studies, 14 of which (73.7%) were conducted in Canada, the USA and Hong Kong. The average sample size across all studies was 119 students (range 5-466). Clinical placements lasted 2-12 weeks, with a median duration of 4 weeks; 14 studies (73.7%) had 2- or 4-week placements.

Twelve of the 19 studies (63.2%) had a non-randomized design and collected assessment data at one timepoint only (20-25, 28-32, 36). The remaining 7 studies (36.8%) were randomized controlled trials and/or had a pre-/post-test design (6, 19, 26, 27, 33-35). Although all studies reported solely or mainly quantitative data, none conducted a power analysis to determine the required sample size. Ten studies (52.6%) considered the reliability and/or validity of their assessment tools (6, 21-26, 28, 30, 31). Two additional studies used standardized assessment questions from the Society of Critical Care Medicine and the American College of Physicians (20, 35). The average MERSQI score was 12.3 (range 5-15) out of a maximum of 18.

A wide variety of student assessment tools were used across the 19 included studies; these are shown in Table 2. The most common were multiple-choice questions (7 studies; 36.8%) (19, 22, 26, 27, 29, 32, 35), written assessment (6 studies; 31.6%) (20, 22, 23, 29, 32, 34) and simulation (6 studies; 31.6%) (21, 23, 25, 26, 30, 31). Seven studies (36.8%) used a combination of more than one assessment tool (22, 23, 25, 26, 29, 30, 32), which in 3 studies included final end-of-year examinations (22, 25, 30).

All 7 studies with a pre-/post-test design showed an improvement in assessment outcomes after clinical placements. All involved a 4-week (1-month) clinical placement, and all were in intensive care medicine. Only 2 of these 7 studies evaluated student performance using simulation and/or OSCE stations (6, 26); the remaining 5 evaluated student knowledge using MCQs or written assessment tools (19, 27, 33-35).

Among the studies with a primary research focus on student learning, contextual learning theory was used as the theoretical basis for one study (20). No other study made explicit methodological reference to an underlying educational theory. Though seldom articulated, Miller's pyramid of learning outcomes (17) was an important theoretical foundation in most of the studies.
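As an illustration of the charting and summarizing steps (Levac steps 4 and 5), the sketch below shows one way study records such as those in Table 1 could be tabulated and summarized; the records, field names, and values are hypothetical stand-ins, not the authors' extracted data.

```python
from collections import Counter
from dataclasses import dataclass
from statistics import mean, median

@dataclass
class StudyRecord:
    # Fields mirror the extraction form described in the Methods.
    lead_author: str
    discipline: str        # "anesthesia", "ICM" or "both"
    sample_size: int
    placement_weeks: int
    tools: tuple           # assessment tools used
    miller_level: str      # "knows", "knows how", "shows how", "does"

# Hypothetical charted records, not the review's data.
studies = [
    StudyRecord("A", "anesthesia", 120, 2, ("MCQ", "OSCE"), "shows how"),
    StudyRecord("B", "ICM", 45, 4, ("MCQ",), "knows"),
    StudyRecord("C", "both", 200, 4, ("simulation",), "shows how"),
]

print(f"mean sample size: {mean(s.sample_size for s in studies):.0f}")
print(f"median placement: {median(s.placement_weeks for s in studies)} weeks")

# Tool frequencies, analogous to the counts reported from Table 2.
tool_counts = Counter(tool for s in studies for tool in s.tools)
for tool, n in tool_counts.most_common():
    print(f"{tool}: {n}/{len(studies)} studies ({n / len(studies):.0%})")
```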
Nine studies (47.4%) evaluated learning outcomes in the "shows how" level 3 domain, using simulation (21, 23, 25, 26, 30, 31) and/or OSCEs (6, 22, 26, 36). Ten studies had learning outcomes in the "knows" or "knows how" domains, using a combination of MCQs, SAQs, essay questions or online case studies (19, 20, 22, 23, 26, 27, 29, 32, 34, 35). One study did not describe the assessment tool used (33). Finally, two studies evaluated student performance in the workplace using subjective observation (24) and multi-source feedback (28), thereby targeting Miller level 4 "does" learning outcomes. No study used workplace-based assessment tools to assess undergraduate learning outcomes.

DISCUSSION

Our scoping review illustrates the heterogeneity of the literature around assessment in undergraduate anesthesia and intensive care medicine. Published studies used numerous assessment instruments targeting learning outcomes that were either knowledge-based (using selected-response MCQs and constructed-response written tests) or performance-based (using OSCEs or a simulated clinical environment). The findings of the review also attest to the meaningful undergraduate learning that can occur in these clinical settings. Though heterogeneous, a majority of the included studies used evidence-based assessment strategies, insofar as either the chosen tools have strong evidence supporting their use in undergraduate medical education (UGME) (MCQs, written exams, simulation, OSCEs) or more than one method was used to inform assessment decisions (8, 9, 37). Furthermore, all except one study (33) used an assessment strategy appropriate to the learning outcomes mapped to Miller's pyramid.

A key objective of undergraduate medical education is to equip students with the competencies to deliver effective and safe patient care in their first year of medical practice and beyond. The assessment of knowledge, or of its theoretical application, may not alone be sufficient to judge whether students are equipped with those skills (38). Accordingly, 9 studies in our review adopted a competency-based approach and used OSCEs or simulation to evaluate student performance. Only 3 of these studies, however, used complementary tools to evaluate learning in the domains of knowledge and understanding as well as competence (22, 23, 26). These are the most informative studies in our review for educators making instructional design decisions about student assessment after anesthesia and ICM placements.

The most frequent tool used to evaluate student performance was simulation, whereby learners were assessed in the "shows how" learning domain. Performance in a simulated environment, however, correlated poorly with written assessments, suggesting a role for both tools in reaching a more complete assessment decision about a student's learning outcomes. There was scant evidence in the studies of observed performance assessment in the workplace (the "does" domain of Miller's pyramid), which likely reflects a lack of active student work in ICUs and anesthetic rooms; students in these environments are more likely to learn by observing than by doing. Nonetheless, recent trends in undergraduate assessment have placed greater emphasis on workplace performance and on the use of entrustable professional activities (EPAs) (39-41).
EPAs entail assessors observing students performing "units of work" (42) (p. 2) and thereby judging the level of supervision each student needs with that activity: the entrustment decision (11). EPAs are mapped to learning curricula and are commonly informed by assessment in the workplace (11). To date, most EPA studies in UGME center on internal medicine and general surgery. While we identified no studies using EPAs solely for the purposes of undergraduate anesthesia/ICM assessment, some included anesthesia or ICM placements as part of a broad learning curriculum (43-45). Furthermore, of the 13 undergraduate EPAs published by the Association of American Medical Colleges, 3 (e.g., recognize a patient requiring urgent or emergent care and initiate evaluation and management) have direct relevance to ICM and anesthesia (46). The use of EPAs also helps address long-standing concerns about graduating students' readiness to commence internship (47). A strong case can therefore be made for applying EPAs to anesthesia and ICM.

We did not identify any studies using a programmatic approach to assessment, though our literature search may not have found studies that included anesthesia or ICM as part of a broader program-wide assessment strategy. Moreover, programmatic assessment challenges the "module-specific" nature of traditional UGME, viewing a clinical placement within the broader context of an overall curriculum (12). It therefore does not readily apply to our review, which ab initio focused on the assessment of a specific placement in anesthesia or ICM.

A common criticism of education research is that theoretical considerations are not brought to the fore, and this also applies to the majority of articles in our review. Notwithstanding this, most of the included studies were designed in such a way that the use of a theoretical paradigm could be implied, and some used instructional design methodology. The primary use of technology to enhance learning in 3 studies (27, 29, 32) draws from eLearning theory (48). The adoption of problem-based learning in 3 further studies acknowledged the importance of constructivism in effective education (6, 26, 34). Aspects of workplace learning theory were evident in the numerous studies that promoted learning within the operating theater and/or the intensive care unit (6, 20, 22, 23, 26, 29, 30, 33, 34, 36). The 6 studies using simulation for assessment (21, 23, 25, 26, 30, 31) likely reflected the importance of experiential learning theory and reflection (49).

A recent development that is difficult to ignore is the impact of the COVID-19 pandemic on undergraduate education. Accounts of formal assessment of students working or learning during the pandemic appear to be rare (19). This likely reflects the time constraints and staff redeployment of the pandemic period, when clinical activities took precedence over academic pursuits (50). Paradoxically, when viewed through the lens of Miller's pyramid, at a time when students may have been more actively involved in clinical care, in "doing" work, they were least likely to be formally assessed.

Our study therefore highlights a gap between published research in anesthesia/ICM assessment and recent advances in undergraduate assessment. For practice to change in these disciplines, however, the approach to student education in anesthetic rooms and ICUs must first evolve to allow more active student participation in daily clinical activities.
Observed workplace assessment can only occur in environments where learners play a legitimate role in patient care. This is the main research gap identified in our review and is an important area for future research into the undergraduate study of anesthesia and intensive care medicine.

Our study results may be limited by the omission of some relevant articles. Each author, however, individually performed the search, and the results were then combined. Studies were individually evaluated for inclusion and, though Cohen's kappa coefficient indicated suboptimal inter-rater agreement, author discussion at each step of study selection addressed any differences in opinion. We included any study where discussion did not resolve perceived differences. To improve the rigor of our study, we used the guidelines published by Maggio et al. (16) in all stages of our review. We consulted with key external stakeholders, but this step yielded no additional information.

In conclusion, our findings yield three useful insights. First, they act as a practice guide for educators directly involved in the design, delivery, and assessment of undergraduate learning in anesthesia and intensive care medicine. Second, they are informative for university educators tasked with the general organization and design of undergraduate medical education, helping them position anesthesia and intensive care medicine in strategies around programmatic assessment and workplace-based entrustable decision-making. Finally, they identify a large research gap for future studies to focus upon.

AUTHOR CONTRIBUTIONS

EO'C and ED were equally involved in all aspects of this study, including design, literature searches, data collection, and manuscript writing. Both authors contributed to the article and approved the submitted version.

REFERENCES

1. Survey of the current status of teaching intensive care medicine in Australia and New Zealand medical schools
2. Development of an undergraduate medical education critical care content outline utilizing the Delphi method
3. Undergraduate education in anaesthesia, intensive care, pain, and perioperative medicine: the development of a national curriculum framework
4. A model for programmatic assessment fit for purpose
5. Twelve tips for teaching in the ICU
6. Vertical integration of basic science in final year of medical education
7. Medical students can learn the basic application, analytic, evaluative, and psychomotor skills of critical care medicine
8. Criteria for good assessment: consensus statement and recommendations from the Ottawa 2010 conference
9. Ottawa 2020 consensus statement for programmatic assessment - 1. Agreement on the principles
11. Curriculum development for the workplace using entrustable professional activities (EPAs): AMEE Guide No. 99
12. Theoretical considerations on programmatic assessment
13. Scoping studies: advancing the methodology
14. Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach
15. Scoping studies: towards a methodological framework
16. Scoping reviews in medical education: a scoping review
17. The assessment of clinical skills/competence/performance
18. Predictive validity evidence for medical education research study quality instrument scores: quality of submissions to JGIM's medical education special issue
19. Developing the eMedical Student (eMS)-a pilot project integrating medical students into the tele-ICU during the COVID-19 pandemic and beyond
20. An innovative course in surgical critical care for second-year medical students
21. Anaesthesiology students' non-technical skills: development and evaluation of a behavioural marker system for students (AS-NTS)
22. Assessment of current undergraduate anaesthesia course in a Saudi University
23. Evaluation of medical students' performance using the anaesthesia simulator
24. The lack of construct validity when assessing clinical clerks during their anesthesia rotations
25. High-fidelity patient simulation: validation of performance checklists
26. Quantifying learning in medical students during a critical care medicine elective: a comparison of three evaluation instruments
27. The use of computerized learning in intensive care: an evaluation of a new teaching program
28. The use of multi-source feedback in assessing undergraduate students in a general surgery/anesthesiology clerkship
29. Web-based formative assessment case studies: role in a final year medicine two-week anaesthesia course
30. Validity and reliability of undergraduate performance assessments in an anesthesia simulator
31. Identification of gaps in the achievement of undergraduate anesthesia educational objectives using high-fidelity patient simulation
32. Evidence of virtual patients as a facilitative learning tool on an anesthesia course
33. Implementation of a formal medical intensive care unit (MICU) curriculum for medical students
34. Teaching medical students complex cognitive skills in the intensive care unit
35. Integrated critical care curriculum for the third-year internal medicine clerkship
36. An adaptation of the objective structured clinical examination to a final year medical student course in anaesthesia and intensive care
37. Simulation-based assessments in health professional education: a systematic review
38. AMEE education guide No 25: the assessment of learning outcomes for the competent and reflective physician
39. Entrustable professional activities in health care education: a scoping review
40. Scoping review of entrustable professional activities in undergraduate medical education
41. Working with entrustable professional activities in clinical education in undergraduate medical education: a scoping review
42. Introducing an assessment tool based on a full set of end-of-training EPAs to capture the workplace performance of final-year medical students
43. Novice students navigating the clinical environment in an early medical clerkship
44. An elective entrustable professional activity-based thematic final medical school year: an appreciative inquiry study among students, graduates, and supervisors
46. Toward defining the foundation of the MD degree: core entrustable professional activities for entering residency
47. Medical students' preparedness for professional activities in early clerkships
48. Elements of a science of e-learning
50. Lessons learned: contribution to healthcare by medical students during COVID-19

CONFLICT OF INTEREST

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of
interest.