key: cord-1045456-qfj2z0su
authors: Engberg, Morten; Bonde, Jan; Sigurdsson, Sigurdur T.; Møller, Kirsten; Nayahangan, Leizl J.; Berntsen, Marianne; Eschen, Camilla T.; Haase, Nicolai; Bache, Søren; Konge, Lars; Russell, Lene
title: Training non‐intensivist doctors to work with COVID‐19 patients in intensive care units
date: 2021-03-03
journal: Acta Anaesthesiol Scand
DOI: 10.1111/aas.13789
sha: add37ef8648e2d8250f01864ca0a305cc9618c30
doc_id: 1045456
cord_uid: qfj2z0su

BACKGROUND: Due to an expected surge of COVID‐19 patients in need of mechanical ventilation, the intensive care capacity was doubled at Rigshospitalet, Copenhagen, in March 2020. This created an urgent need for doctors competent to work with critically ill COVID‐19 patients. A training course and a theoretical test for non‐intensivist doctors were developed. The aims of this study were to gather validity evidence for the theoretical test and to explore the effects of the course.

METHODS: The 1‐day course comprised theoretical sessions and hands‐on training in ventilator use, hemodynamic monitoring, vascular access, and use of personal protective equipment. Validity evidence was gathered for the test by comparing answers from novices and experts in intensive care. Doctors who participated in the course completed the test before (pretest), after (posttest), and again within 8 weeks following the course (retention test).

RESULTS: Fifty‐four non‐intensivist doctors from 15 different specialties and with a wide range of clinical experience completed the course. The test consisted of 23 questions and demonstrated a credible pass/fail standard at 16 points. The mean pretest score was 11.9 (SD 3.0), the mean posttest score 20.6 (SD 1.8), and the mean retention test score 17.4 (SD 2.2). All doctors passed the posttest.

CONCLUSION: Non‐intensivist doctors, irrespective of experience level, can acquire relevant knowledge for working in the ICU through a focused 1‐day evidence‐based course. This knowledge was largely retained, as shown by a multiple‐choice test supported by validity evidence. The test is available in the Appendix and online.

At Rigshospitalet (Copenhagen University Hospital), we opened an entirely new intensive care unit with 60 beds dedicated to the treatment of COVID-19 patients. The new unit, named COVITA, increased the overall capacity at Rigshospitalet to 120 ICU beds, thereby doubling it. As a consequence of this sudden expansion, the existing ICU medical staff was insufficient, resulting in an urgent need for doctors trained to work in COVITA. The widespread cancellation of elective surgery and outpatient appointments due to the pandemic meant that doctors from a wide range of specialties and experience levels were available. However, because the knowledge and clinical skills necessary in the ICU differ from those needed in other specialties, we undertook the task of quickly organizing a course to train non-intensivist doctors to care for COVITA patients.

The framework for the course was based on an extensive educational needs assessment study among doctors and nurses in Wuhan, who were working with COVID-19 patients at the peak of the epidemic in early 2020. The aims of the needs assessment, which we performed in collaboration with doctors at Sun Yat-sen University, Guangzhou, China, were to identify theoretical and practical training needs. Based on the results from this collaboration, we developed a 1-day ICU training course for non-intensivist doctors, comprising both theoretical and hands-on sessions.
To ensure that the course aims were met and that doctors had the required knowledge after the course, objective assessment of the course effects with a test was crucial. Importantly, such a test should be validated to ensure that it measured the intended competence. 3, 4

The aims of this study were to develop and assess the validity of a theoretical test of knowledge in intensive care for COVID-19 patients and to explore the short- and long-term effects of a fast-track course specifically developed to train experienced non-intensivist doctors in intensive care. We hypothesized that doctors with clinical experience from other hospital specialties would be ready to assist in the ICU after a focused 1-day course and that the effects of such a course, given in the context of an ongoing pandemic, would be long-lasting.

The correct answers were defined based on the local application of the international guidelines for management of critically ill adults with COVID-19 and best practice in intensive care. 6, 7 We investigated the validity of the MCQ test using the contemporary framework developed by Messick. 3 The test was administered to two groups: (a) doctors currently working in an ICU who were either consultants in intensive care or had at least 2 years of postgraduate clinical ICU experience ("experts"); (b) Danish medical students in their last year of medical school ("novices").

The course was based on the previously mentioned needs assessment and aimed to prepare non-intensivist doctors both theoretically and practically to treat COVID-19 patients in the ICU (Table 1). The course program consisted of two theoretical sessions and four hands-on simulation-based sessions (Table 1). The material for the theoretical sessions was prepared by a group of intensive care physicians (SS, NH, SB).

Course participants took the test immediately before ("pretest") and immediately after ("posttest") the sessions of the day. Six weeks after the course, all participants received email invitations and subsequent reminders to retake the test as a follow-up within 2 weeks ("retention test") (Table 1). The pre- and posttests were completed supervised, in a printed format, in the classroom. The follow-up tests were completed unsupervised, at the participants' discretion, using the online version of the MCQ test.

Test validation: Item analysis was performed on the 25 multiple-choice questions. The mean test scores of the two groups were compared using an independent samples t-test to check the test's discriminatory ability. The variances of the two groups were compared using Levene's test. A pass/fail standard was defined using the contrasting groups' method. 9 Test scores between groups were compared using independent samples t-tests and score changes using one-sample t-tests. The effect of pretest scores on posttest scores was estimated in a univariate linear regression model. All analyses were performed using IBM SPSS Statistics.

[Table 1. Content of the 1-day fast-track course.]

For the validation of the test, 37 experienced intensivists ("experts") were invited to participate; they all completed the test. One hundred and thirty-five final-year medical students from the four medical schools in Denmark had completed the test when enrolment was closed. To balance the data for the statistical analysis, only the first 74 consecutive answers were enrolled in the study. Item analysis based on all answers (n = 111) revealed two questions with an item discrimination index <0.1, which were discarded.
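For readers who wish to reproduce this type of analysis, the following is a minimal sketch, not the authors' code. It assumes a binary item-response matrix and uses two common conventions that the paper does not spell out: the upper/lower 27% split for the discrimination index and the intersection of fitted normal densities for the contrasting groups' standard. The function names are ours.

```python
# Minimal sketch (not the authors' code) of classical item analysis and
# contrasting-groups standard setting, under the assumptions stated above.
import numpy as np
from scipy.stats import norm


def item_indices(responses: np.ndarray):
    """responses: binary matrix, rows = respondents, columns = items.
    Difficulty = proportion of all respondents answering each item correctly.
    Discrimination (one common convention) = proportion correct among the
    top 27% of total scorers minus that among the bottom 27%."""
    totals = responses.sum(axis=1)
    order = np.argsort(totals)
    k = max(1, round(0.27 * len(totals)))
    low, high = responses[order[:k]], responses[order[-k:]]
    return responses.mean(axis=0), high.mean(axis=0) - low.mean(axis=0)


def contrasting_groups_cutoff(novice_scores, expert_scores):
    """Pass/fail score where normal densities fitted to the two groups'
    total scores intersect, ie, the score equally likely under both groups."""
    mu_n, sd_n = np.mean(novice_scores), np.std(novice_scores, ddof=1)
    mu_e, sd_e = np.mean(expert_scores), np.std(expert_scores, ddof=1)
    grid = np.linspace(mu_n, mu_e, 10_000)  # the cutoff lies between the means
    gap = np.abs(norm.pdf(grid, mu_n, sd_n) - norm.pdf(grid, mu_e, sd_e))
    return grid[gap.argmin()]


# Toy usage: items with a discrimination index below 0.1 would be discarded,
# as in the study, before setting the pass/fail standard on total scores.
rng = np.random.default_rng(0)
difficulty, discrimination = item_indices(rng.integers(0, 2, size=(111, 25)))
keep = discrimination >= 0.1
```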
Difficulty indices were calculated for the remaining 23 questions; none was found too easy or too difficult (Table 2). Therefore, the final test consisted of 23 questions. Comparison of the experts' and novices' scores (maximum 23 points) showed that the experts scored better than the novices (mean 19.6 (SD 1.8) versus mean 9.5 (SD 3.2); P < .001) (Figure 1), demonstrating a strong relation to experience, with a lower variance in scores among the experts (P = .003). A credible pass/fail standard was established at 16 points (Figure 1). Only two novices achieved this score (3% false positives), whereas one expert failed (3% false negatives). The final test is presented in Appendix A and is available online at https://www.flexiquiz.com/SC/N/COVID-19MCQ.

[Figure 1. Establishing a pass/fail standard using the contrasting groups' method. Comparison of the experts' and novices' scores in the final test (maximum 23 points) showed that the experts scored significantly better than the novices (mean 19.6 (SD 1.8) vs. mean 9.5 (SD 3.2); P < .001), demonstrating a strong relation to experience. A credible pass/fail standard was established at 16 points. 9 Only two novices achieved this score (3% false positives), whereas one expert failed (3% false negatives).]

Requests to volunteer were primarily distributed through the heads of departments, but many doctors contacted us directly.

Results of pre- and posttest: All 54 doctors completed the pre- and posttest. The mean pretest score was 11.9 (SD 3.0), which was higher than that of the final-year medical students (P < .001).

We chose the MCQ format for assessment since it is easily scalable and requires little time and few resources. The test can be completed in approximately 10 minutes, and with the online version the test score is readily available. It is therefore easily repeatable and could be used for repetition and re-certification purposes at minimal cost. A weakness of the MCQ format is the rigid dichotomous scoring of the questions, which makes the test very easy to score but does not allow for qualified, elaborate answers. Most importantly, we should keep in mind that the MCQ format tests only specific knowledge, not clinical experience, intuition, leadership, or procedural skills; important qualifications that are likely to correlate with seniority. 10

We tested knowledge retention 6-8 weeks after the course. Most knowledge loss typically occurs within a short time period after learning. 11 Anticipating a flattening of the knowledge decline before 6 weeks, our finding of a score decline of 15% is low 11 (see the worked check below). The use of tests on the course day could contribute to this, as could the doctors' anticipation of clinical duty, which may have motivated self-directed repetition, for example using the guidelines 6 distributed on the course day. Furthermore, by studying retention at 6-8 weeks, we have likely enhanced its duration further, since the repeated test itself functions as a formalized spaced repetition of the curriculum. 12 In fact, spaced repetition tests may be used in a structured manner to maintain knowledge, thereby ensuring preparedness also for the next epidemic wave.

When discussing education in relation to a viral outbreak, the choice of outcome matters. Since physicians' self-evaluated competence does not correlate with their actual skills, objective outcome assessment is critical when evaluating educational efforts. 17 Use of assessment also motivates the learners and supports long-term knowledge retention. 18-21
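As a sanity check, two of the figures reported above can be reconstructed from the summary statistics alone. This is our reconstruction under a normality assumption; the paper does not state its exact numerical procedure.

```latex
% Contrasting groups: cutoff c where the two fitted normal densities meet
\frac{1}{\sigma_N}\exp\!\left[-\frac{(c-\mu_N)^2}{2\sigma_N^2}\right]
  = \frac{1}{\sigma_E}\exp\!\left[-\frac{(c-\mu_E)^2}{2\sigma_E^2}\right],
\qquad \mu_N = 9.5,\ \sigma_N = 3.2,\ \mu_E = 19.6,\ \sigma_E = 1.8,

% which has a single root between the two means:
c \approx 15.6, \text{ consistent with the chosen standard of 16 points.}

% Retention: relative decline from posttest to retention test
\frac{20.6 - 17.4}{20.6} \approx 15.5\%,
\text{ matching the reported decline of about } 15\%.
```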
This study demonstrates that it is possible to develop a curriculum and a test supported by validity evidence despite a time-limited situation. In general, and most certainly during a pandemic, requirements for training should be based on educational needs and local conditions. 22 Simulation-based training, which has well-documented positive effects, is an efficient way of providing training while protecting trainees and patients from unnecessary harm. 23, 24 Stress-free training in the use of personal protective equipment is especially beneficial 25-27 and was suggested by the majority of respondents in our needs assessment from Wuhan. However, simulation-based training is resource demanding, and if local training programs are not in place, e-learning programs could be a cost-effective option. Still, e-learning is most efficient when combined with other educational modalities. 28

[Figure 2. Individual results of pre-, post-, and retention tests. Posttest scores were positively correlated with pretest scores, but the effect size was small: beta = 0.16 (P < .05), corresponding to an average of 1 additional point in the posttest for every 1/0.16 ≈ 6.3 additional points in the pretest.]

Previous viral epidemics, such as SARS, swine flu, MERS, and Ebola, required urgent preparations, 31 including management of critically ill patients 32 and correct use of personal protective equipment. 33 However, as these epidemics thankfully faded out, focus went elsewhere. As a result, when the COVID-19 pandemic struck, our hospital, like many others, had to develop training programs from scratch. The lesson learned is that education and training for crises should not be undertaken solely during an ongoing viral epidemic but also during "peacetime," in order to be well prepared for the next outbreak. Given the second wave of infections, 34 this reinforces the need to reflect on the key lessons from the initial wave and thereby ensure that we have relevant training curricula in place to prepare for future infectious threats. 35

Importantly, this study did not measure actual clinical performance. Although 54 doctors completed the course and tests, the sample size is insufficient to explore differences between specialties. A risk of recruitment bias exists, since participants in the novice group (part A) as well as the course participants volunteered in response to advertisements, whereas participants in the expert group (part A) were invited personally. Most course participants were doctors at Rigshospitalet, Copenhagen, and the external validity of these findings has not been explored. The novices were final-year medical students, not qualified doctors; they had, however, completed all mandatory courses in intensive care medicine. Recruitment was done through social media groups, making it impossible to verify the accuracy of the information provided. However, manual validation of the dataset did not reveal obvious "false" entries.

The test was not administered in exactly the same way for all groups, owing to clinical duties and the risk of infection. The novice group tests (part A) and the retention tests (part C) were administered using an online version, whereas the experts' tests (part A) and the course participants' pre- and posttests were done in a printed format. To minimize the risk of participants consulting references during the online test, an automatic timer of 60 seconds was set for each question. Similarly, the supervised printed tests were all completed in less than 15 minutes.
In conclusion, we have no reason to suspect that the data are systematically biased by "cheating."

In this study, we developed a focused 1-day evidence-based course for non-intensivist doctors in caring for critically ill COVID-19 patients. Using a newly developed test supported by validity evidence, we found that doctors acquired relevant knowledge to work in the ICU and that this knowledge was largely retained.

We would like to express our gratitude to all the doctors from other specialties who participated in our courses and for a brief time worked at COVITA with us, as well as our appreciation of the immense support we received from the Heads of Departments at Rigshospitalet, who helped us recruit staff for COVITA.

Appendix A

Correct answers are highlighted in bold. Please note that these reflect local practice and best available knowledge at the time of the course and should be reviewed before use.

a. The ventilator has no effect on expiration.
b. The ventilator actively sucks air out during expiration.
c. The expiration is passive, but it is influenced by the ventilator settings.
d. The expiration is passive, but the ventilator adjusts so that the volumes of sequential inspirations and expirations are the same.

6. What is the treatment goal for the blood hemoglobin level, Hb,

14. The pressure transducer for the arterial line has been placed below the bed. How will this affect the blood pressure values on the monitor?
a. Not at all; the transducer height does not matter.
b. The mean arterial pressure is artificially higher than the actual patient pressure.
c. The mean arterial pressure is artificially lower than the actual patient pressure.
d. The systolic blood pressure will be elevated, but the diastolic blood pressure will be lowered.

b. Cardiac output increases due to increased oxygenation.
c. Cardiac output decreases due to increased intra-thoracic pressure.
d. Cardiac output increases due to increased intra-thoracic pressure.

17. Which ventilation setting should normally also be changed when
b. The tidal volume should be increased (eg, to 9 mL/kg).

References

1. Status of the epidemic: COVID-19 updates, statistics and charts. Available from: www.sst.dk/en/English/Corona-eng/Status-of-the-epidemic/COVID-19-updates-Statistics-and-charts
2. Hospital surge capacity in a tertiary emergency referral centre during the COVID-19 outbreak in Italy.
3. Current concepts in validity and reliability for psychometric instruments: theory and application.
4. Assessment in health professions education.
5. Constructing Written Test Questions for the Basic and Clinical Sciences. 4th edn. National Board of Medical Examiners.
6. Surviving Sepsis Campaign: guidelines on the management of critically ill adults with Coronavirus Disease 2019 (COVID-19).
7. Surviving Sepsis Campaign: international guidelines for management of sepsis and septic shock.
8. Developing and validating multiple-choice test items.
9. Contrasting groups' standard setting for consequences analysis in validity studies: reporting considerations.
10. Clinical intuition and clinical analysis: expertise and the cognitive continuum.
11. Long-term retention of basic science knowledge: a review study.
12. Spaced education improves the retention of clinical knowledge by medical students: a randomised controlled trial.
13. The severe acute respiratory syndrome.
14. Swine flu.
15. Middle East respiratory syndrome.
16. WHO Ebola Response Team. Ebola virus disease in West Africa: the first 9 months of the epidemic and forward projections.
17. Accuracy of physician self-assessment compared with observed measures of competence.
18. The critical role of retrieval practice in long-term retention.
19. The effect of testing versus restudy on retention: a meta-analytic review of the testing effect.
20. The testing effect on skills learning might last 6 months.
21. Test-enhanced learning in health professions education: a systematic review. BEME Guide No. 48.
22. The role of deliberate practice in the acquisition of expert performance.
23. Does simulation-based medical education with deliberate practice yield better results than traditional clinical education? A meta-analytic comparative review of the evidence.
24. Simulation in healthcare education: a best evidence practical guide. AMEE Guide No. 82.
25. Preparing and responding to 2019 novel coronavirus with simulation and technology-enhanced learning for healthcare professionals: challenges and opportunities in China.
26. A randomized trial of instructor-led training versus video lesson in training health care providers in proper donning and doffing of personal protective equipment.
27. Understanding workflow and personal protective equipment challenges across different healthcare personnel roles.
28. Internet-based learning in the health professions: a meta-analysis.
29. Online training as a weapon to fight the new coronavirus.
30. Available from: www.esicm.org/covid-19-skills-preparation-course
31. Pandemic preparedness and response: lessons from the H1N1 influenza of 2009.
32. Intensive care management of coronavirus disease 2019 (COVID-19): challenges and recommendations.
33. "The Art of War" in the era of coronavirus disease 2019 (COVID-19).
34. Beware of the second wave of COVID-19.
35. Curriculum development for medical education: a six-step approach.