EDUCATIONAL ADVANCE

Emergency Medicine Residents' Self-assessments Play a Critical Role When Receiving Feedback

Richard Bounds, MD, Colleen Bush, MD, Amish Aghera, MD, Nestor Rodriguez, MD, R. Brent Stansfield, PhD, and Sally A. Santen, MD, PhD (the MERC at CORD Feedback Study Group)

From the Departments of Emergency Medicine, Christiana Care Health System (RB), Newark, DE; Michigan State University (CB), East Lansing, MI; Maimonides Medical Center (AA), New York, NY; the University of Wisconsin School of Medicine and Public Health (NR), Madison, WI; and the University of Michigan Medical School (RBS, SAS), Ann Arbor, MI.

Received February 25, 2013; revision received May 10, 2013; accepted May 17, 2013.

Presented at the American College of Emergency Physicians Scientific Assembly Research Forum, Denver, CO, October 2012; and the Society for Academic Emergency Medicine Annual Meeting Innovations in Emergency Medical Education, Chicago, IL, May 2012.

The authors have no conflicts of interest or financial support to disclose. The study was initiated as a collaborative mentored multicenter project through the Medical Education Research Certificate (MERC) program, a joint faculty development program of the Association of American Medical Colleges and the Council of Emergency Medicine Residency Directors.

Supervising Editor: Lalena Yarris, MD.

Address for correspondence and reprints: Richard Bounds, MD; e-mail: richbounds@gmail.com.

doi: 10.1111/acem.12231

Abstract

Objectives: Emergency medicine (EM) faculty often aim to improve resident performance by enhancing the quality and delivery of feedback. The acceptance and integration of external feedback are influenced by multiple factors, but the feedback is always interpreted through the "lens" of the learner's own self-assessment. Ideally, following an educational activity with feedback, a learner should be able to generate and act upon specific learning goals to improve performance. Examining the source of generated learning goals, whether from one's self-assessment or from external feedback, might shed light on the factors that lead to improvement and guide educational initiatives. Using a standardized oral board scenario, this study sought to determine the effects that residents' self-assessments and specific faculty feedback have, not only on the generation of learning goals, but also on the execution of those goals for performance improvement.

Methods: In this cross-sectional educational study at four academic programs, 72 senior EM residents participated in a standardized oral board scenario. Following the scenario, residents completed a self-assessment form. Next, examiners used a standardized checklist to provide both positive and negative feedback. Subsequently, residents were asked to generate "SMART" learning goals (specific, measurable, attainable, realistic, and time-bound). The investigators categorized the learning goals as stemming from the residents' self-assessments, from feedback, or from both. Within 4 weeks, the residents were asked to recall their learning goals and describe any actions taken to achieve those goals; these were grouped into similar categories. Descriptive statistics were used to summarize the data.

Results: A total of 226 learning goals were initially generated (mean ± SD = 3.1 ± 1.3 per resident). Forty-seven percent of the learning goals were generated from the residents' self-assessments alone, while 27% were generated from feedback alone. Residents who performed poorly on the case incorporated feedback more often than high performers when generating learning goals. On follow-up, 62 residents recalled 89 learning goals, of which 52 were acted upon. Among executed goals, the numbers stemming from self-assessment alone and from feedback alone were equal (25% each, 13 of 52), while the greatest number of reportedly executed learning goals came from self-assessment and feedback in agreement (40%).

Conclusions: Following feedback on an oral board scenario, residents generated the majority of their learning goals from their own self-assessments. At follow-up, however, they recalled a greater share of learning goals stemming from feedback, while the largest proportion of learning goals acted upon stemmed from feedback and self-assessment in agreement. This suggests that educators need to incorporate residents' self-assessments into any delivered feedback to have the greatest influence on future learning goals and actions taken to improve performance.

ACADEMIC EMERGENCY MEDICINE 2013; 20:1055–1061 © 2013 by the Society for Academic Emergency Medicine

Educators in the medical field often struggle with the efficient delivery of valuable feedback that reliably motivates learners to improve their performance. Previous research has shown that effective formative feedback comes from a credible source and is focused on the task as opposed to the individual.1–4 Feedback delivery in medical education is challenged by numerous factors, including a lack of training for faculty and the need to protect learners' self-esteem and preserve a positive working relationship.2 Many educators measure the value of feedback by looking at the learner's improvement process and satisfaction with the feedback rather than at actual results and changes in behavior.1,5 The generation of specific goals by the learner is a powerful method for shifting the focus from the process of improvement toward actual results and desired outcomes.6–9 Although learning goals should be created through reflection, it is critical that learners incorporate feedback from evaluators into the process, and evidence suggests that learners struggle with the interaction between self-assessment and feedback.6,10 Unfortunately, negative feedback may be consciously or unconsciously rejected and is therefore less likely to be incorporated into the generation of learning goals.2,11

In one study, learners' perceptions of their own abilities were more likely than the actual feedback to result in the generation of learning goals.7 These findings are troubling given the literature suggesting that physicians are unable to self-assess accurately.7 In fact, the least skilled and most overconfident physicians, who would benefit the most from constructive feedback, have shown the worst accuracy in self-assessment.12 This concern was supported by a recent study demonstrating that many low performers did not generate learning goals concordant with their areas of weakness.6 To improve our residents' performance, we need to further explore how feedback and self-assessment are each incorporated into the generation of learning goals and how the two interact in the mind of the learner.6–8

This study evaluated the sources of learning goals generated by emergency medicine (EM) residents after participating in a standard oral board
examination, performing their own self-assessments, and receiving specific feedback on their performance. The objective was to investigate the contributions of self-assessment and external feedback, and how the two interact, in the formation of learning goals, as well as the reported follow-through on those goals for performance improvement. We expected self-assessments to play a significant role in the formation of learning goals, but we expected faculty feedback to play the greater role, especially when the two perspectives contradicted one another. We also examined whether other factors affected the generation of learning goals, such as the quality of the feedback provided, the high or low performance of the resident, and the correlation between faculty assessments and the learners' self-assessments.

METHODS

Study Design

This was a multicenter, observational, cross-sectional educational intervention study using an oral board scenario as the basis for self-assessment, feedback, and the development of learning goals. The study was reviewed and approved by the local institutional review board at each of the four sites. Outcomes were deidentified and kept confidential. All participants signed written consent.

Study Setting and Population

The study was conducted at four EM residency programs, led by one investigator at each site. All postgraduate year (PGY) 2 and above residents who were available on the designated days of study enrollment were offered the opportunity to participate in the oral board case, and 72 volunteered. Interns (PGY-1 residents) were excluded due to their limited experience with the oral board format. Thirty residents were PGY-2 (42%), and 39 were PGY-3 (54%). Three residents from one site were PGY-4 or -5 as part of a dual training program (EM and internal medicine).

Study Protocol

The four investigators, who hold primary teaching appointments, administered the oral board scenario and feedback. They developed the study protocol together during a national certification program for researchers in medical education. As a tool for studying the interaction between feedback and self-assessment, a single-case oral board scenario was taken from the Council of Emergency Medicine Residency Directors oral board case bank.13 By consensus, a case of cardiac arrest due to ventricular fibrillation was selected and modified by the study group to include all six Accreditation Council for Graduate Medical Education core competencies and to incorporate skills that would challenge residents at all levels of training (see Data Supplement S1, available as supporting information in the online version of this paper). The investigators felt that all PGY-2 and above EM residents should demonstrate competency in advanced cardiac life support (ACLS) protocols and resuscitation, as well as communication with a cardiologist and a patient's family. More advanced aspects of the case, such as recognition of QT interval prolongation and an understanding of the pathophysiology of digoxin toxicity in the setting of hypokalemia, were added to challenge the senior residents.

For the primary outcome of learning goals generated by the resident, we used the "SMART" framework as a guide: specific, measurable, attainable, realistic, and time-bound.
SMART learning goals have been employed successfully in business and general education for many years, and the literature on learning goals in medical education supports the framework's utility.6,8,9

The study group created a structured feedback form consisting of both a nationally validated quantitative scoring system and a novel qualitative feedback checklist (see Data Supplement S2, available as supporting information in the online version of this paper). To standardize feedback across investigators, specific positive and negative feedback phrases for each critical action of the case were developed by group consensus. Positive feedback included phrases such as "Accu-check performed promptly" and "recognized prolonged QT on EKG," while negative feedback included points such as "does not obtain confirmatory CXR after intubation" and "does not speak to patient's wife after patient is stabilized." The investigators agreed to use only these scripted positive and negative phrases in their feedback delivery, and the points verbally delivered were documented under "things done well" and "points for improvement." In addition, the American Board of Emergency Medicine (ABEM) oral board assessment form was used to generate a quantitative score across eight separate domains. These domains included skills such as data acquisition, problem solving, and interpersonal relations, and each was scored on a scale of 1 to 8, with 8 being the highest.

Prior to study initiation, the protocol was pilot-tested by the investigators with two recent graduates from each of the sites. Following each pilot test, the subject was shown the critical actions and feedback checklist; the assessment forms and learning goals were discussed openly; and the subject's input was used to modify the case and the feedback checklist, developing response-process and internal-structure validity evidence. The five categories for the subsequent learning goals (discussed below) were developed from these pilot data.

The four investigators administered the examination at their own institutions with their own EM residents, guiding each resident through the standardized protocol individually. First, each resident participated in the oral board case scenario, with the investigator as the examiner. For the self-assessment, after the case each resident scored his or her own performance using the ABEM oral board evaluation form, then had 5 minutes to note specific strengths and weaknesses in that performance (see Data Supplement S3, available as supporting information in the online version of this paper). Of note, the resident was not given access to the list of critical actions or the feedback checklist, so that the self-assessment would be generated solely from reflection. The play of the case might have provided some real-time feedback, however, as the patient improved when critical actions were met but decompensated when errors were made or critical steps were not taken. While the resident completed the self-assessment form, the examiner completed the feedback checklist and ABEM scoring form (Data Supplement S2). Forms were not shared, and the resident was asked to avoid discussing the self-assessment with the examiner.
The examiner then verbally provided two to four specific positive feedback phrases from the "things done well" section of the form and two to four "points for improvement." The range of two to four was chosen by group consensus, based on experience as well as on medical education research coursework indicating that overwhelming the learner with too much feedback is counterproductive. Following the self-assessment and feedback delivery, the resident was asked to generate SMART learning goals in writing based on the entire experience (see Data Supplement S4, available as supporting information in the online version of this paper). Clear definitions and examples of SMART goals, adapted from a study by Chang et al.,6 were provided, and each resident was asked to read them before listing his or her learning goals (Table 1). Last, the resident was asked to rate the effectiveness of the feedback received from the examiner using a five-question feedback rating form adapted from a study by Eva et al.7 on the generation of learning goals. An eight-point rating scale from "worst" to "best" was used for each question, and the fifth question, on "overall quality," was used for the statistical analysis (see Data Supplement S5, available as supporting information in the online version of this paper).

Table 1: SMART Learning Goals and Examples

  Specific:    "I will be able to clearly hear systolic murmurs in adult and pediatric patients."
  Measurable:  "I will improve my in-training exam score by 10% over the next year."
  Achievable:  "I will read two to four articles per month on important medical topics."
  Realistic:   "I will overcome my hesitancy to discuss my differential diagnosis on rounds."
  Time-bound:  "I will improve 50% in 8 weeks and achieve my goal by May."

Immediately following data collection at each site, the investigators reviewed and categorized the raw data. The feedback checklists and assessment forms were uploaded to a cloud platform, and at least three investigators reviewed the data by conference call. The team reviewed the forms for completeness, came to consensus on interpreting the source of each resident's learning goals, and categorized the goals as stemming from the self-assessment or from the examiner's feedback. Separate categories were created for learning goals that came from both the self-assessment and the feedback in agreement, as well as categories for one in disagreement with the other. In addition, two summary groups were created for the purposes of data analysis: total associated with self-assessment and total associated with feedback. The former comprised three categories: self-assessment only, self-assessment and feedback in agreement, and self-assessment in disagreement with the feedback. The latter summed the learning goals from feedback only, from self-assessment and feedback in agreement, and from feedback in disagreement with self-assessment.

For follow-up, within 2 to 4 weeks of the oral board case, the residents were given a form and asked to recall their learning goals and describe any actions taken toward achieving those goals. For example, if a resident wrote the learning goal, "I will review the ACLS protocols for unstable tachycardia," did she remember that goal on follow-up, and did she actually study the ACLS protocols in the intervening weeks?
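Because goals in agreement count toward both summary groups, the two totals overlap and their percentages can exceed 100% (as in Table 2). The sketch below illustrates this roll-up logic in Python; the category labels, function name, and example data are our own hypothetical constructions for illustration, not part of the study instruments.

```python
from collections import Counter

# The five mutually exclusive source categories described in the Methods.
CATEGORIES = (
    "self_only",          # self-assessment only
    "feedback_only",      # feedback only
    "both_agree",         # self-assessment and feedback, in agreement
    "feedback_disagree",  # feedback, in disagreement with self-assessment
    "self_disagree",      # self-assessment, in disagreement with feedback
)

def summarize(goal_sources):
    """Tally goals by category, then roll them up into the two
    overlapping summary groups used in the data analysis."""
    counts = Counter(goal_sources)
    n = len(goal_sources)
    # Total associated with self-assessment: any goal touching the
    # resident's own self-assessment.
    self_total = counts["self_only"] + counts["both_agree"] + counts["self_disagree"]
    # Total associated with feedback: any goal touching the examiner's feedback.
    feedback_total = counts["feedback_only"] + counts["both_agree"] + counts["feedback_disagree"]
    return counts, self_total / n, feedback_total / n

# Hypothetical example: one resident's four goals, labeled by consensus review.
counts, self_frac, feedback_frac = summarize(
    ["self_only", "both_agree", "feedback_only", "self_only"]
)
print(self_frac, feedback_frac)  # 0.75 and 0.5: the fractions sum to more than 1.0
```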
The learning goals recalled and reportedly acted upon at follow-up were categorized in the same fashion as the initially generated learning goals, so that the initial and follow-up data could be compared.

Data Analysis

Descriptive statistics (frequency tables, 95% confidence intervals) were used to summarize the data. To determine the summary self-assessment and faculty ratings of the oral board case, the scores from each domain (data acquisition, problem solving, patient management, etc.) on the ABEM forms were summed. To address concerns about violations of normality and homoscedasticity, we examined the model residuals: the error variance did not differ significantly from normal (Shapiro-Wilk W = 0.97, p = 0.09), and the model showed no signs of heteroscedasticity (Breusch-Pagan test = 2.00, p = 0.6). Linear regression compared the quantitative self-assessment and faculty assessment scores. We estimated each resident's likelihood of incorporating feedback into his or her learning goals using residual (restricted) maximum likelihood (REML).14 These estimates were log-likelihood ratios, where a likelihood ratio of 0.9 means that a resident was 90% as likely as the average resident to use feedback in generating learning goals. We ran Pearson's correlations between perceived quality of feedback, self-assessment ratings, faculty assessment ratings, and the likelihood of using faculty feedback in the generation of learning goals. Finally, quality of feedback, self-assessment, and faculty ratings were entered into a regression with the likelihood of incorporating feedback into learning goals as the dependent variable.

RESULTS

Source of Learning Goals

Of the residents offered an opportunity to participate in the study at the four sites, 96% (n = 72) volunteered. The 72 enrolled subjects generated a total of 226 learning goals (mean ± SD = 3.1 ± 1.3 per resident), which the investigators categorized according to source, whether from self-assessments or feedback (Table 2). The majority of learning goals were associated with the residents' own self-assessments (73%). Surprisingly, fewer than half of the learning goals were generated from faculty feedback. Residents almost never incorporated feedback that disagreed with their own self-assessments; however, they sometimes generated learning goals based on self-assessments that contradicted the feedback provided by the examiners (4%).

The Relationship Between Faculty Assessment and Resident Self-assessment Scores

There was some agreement between faculty scores and resident self-scoring, as one might expect given the standardized domains of the ABEM oral board assessment form. Linear regression of self-assessment by faculty assessment reached statistical significance, although the correlation was weak (r = 0.28, p < 0.05).

Factors Affecting the Generation of Learning Goals

A minority of our residents incorporated feedback into their learning goals. We analyzed different factors in an attempt to determine which residents were more or less likely to integrate external feedback. First, we sought to determine whether residents use feedback based on its perceived quality. We compared the residents' ratings of feedback quality with their likelihood of using the feedback and found no significant relationship (r = 0.06, p = 0.6). This suggests that learners are not more likely to use feedback they deem to be of higher quality.
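To make the analytic steps concrete, the following is a minimal sketch of the pipeline described under Data Analysis, using standard scipy and statsmodels calls. The data frame columns, file name, and the precomputed feedback likelihood-ratio column are assumptions for illustration; the study's actual dataset and REML estimation are not reproduced here.

```python
import pandas as pd
import statsmodels.api as sm
from scipy import stats
from statsmodels.stats.diagnostic import het_breuschpagan

# Assumed layout: one row per resident, with summed ABEM domain scores
# (eight domains scored 1-8, so totals range 8-64), the perceived quality
# of feedback, and each resident's likelihood ratio of incorporating
# feedback into learning goals (estimated elsewhere via REML).
df = pd.read_csv("resident_scores.csv")  # hypothetical file

# Linear regression comparing self-assessment and faculty assessment scores.
X = sm.add_constant(df["faculty_score"])
model = sm.OLS(df["self_score"], X).fit()

# Residual diagnostics reported in the paper: normality of the errors
# (Shapiro-Wilk) and homoscedasticity (Breusch-Pagan).
shapiro_w, shapiro_p = stats.shapiro(model.resid)
bp_lm, bp_p, _, _ = het_breuschpagan(model.resid, X)

# Pearson's correlations with the likelihood of using faculty feedback.
for col in ("feedback_quality", "self_score", "faculty_score"):
    r, p = stats.pearsonr(df[col], df["feedback_lr"])
    print(f"{col}: r = {r:.2f}, p = {p:.3f}")

# Final model: all three predictors of feedback incorporation.
X3 = sm.add_constant(df[["self_score", "feedback_quality", "faculty_score"]])
print(sm.OLS(df["feedback_lr"], X3).fit().summary())
```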
Another secondary hypothesis was that residents who score themselves highly (i.e., are more self-confident) would be less likely to integrate external feedback. However, the relationship between self-assessment scores and incorporation of feedback was also not significant (r = 0.01, p = 0.9).

Next, we compared the faculty rating (as opposed to the self-assessment rating) with the incorporation of feedback: did residents who were high performers by faculty ratings use feedback less often? Here we found a negative relationship between faculty ratings and the use of feedback (r = –0.35, p < 0.005), suggesting that high-performing residents were less likely to incorporate feedback, while poor performers tended to incorporate feedback into their learning goals.

Table 2: Sources of Learning Goals

  Goal Category                                    No. (%) by goal (n = 226)   No. (%) by resident* (n = 72)
  Self-assessment only                             106 (47)                    55 (76)
  Feedback only                                    60 (27)                     38 (53)
  Self-assessment and feedback, in agreement       48 (21)                     31 (43)
  Feedback, in disagreement with self-assessment   2 (1)                       2 (3)
  Self-assessment, in disagreement with feedback   10 (4)                      9 (12)
  Total associated with self-assessment            164 (73)                    66 (92)
  Total associated with feedback                   110 (49)                    55 (76)

  *Each resident documented multiple learning goals.

Because the correlation between faculty rating and feedback utilization was the only significant relationship found, we controlled for self-assessment scores and quality-of-feedback ratings in case these served as confounders. All three variables were analyzed in a linear regression model, yielding a likelihood ratio of 0.79 for faculty assessment (Table 3). In other words, a 10-point higher faculty rating (out of 64 points total, as eight domains were each scored 1 to 8) made a resident 21% less likely to integrate feedback into a particular learning goal.

Table 3: Linear Regression Model*

  Variable              Likelihood Ratio   Estimate   Standard Error   t-value   p-value
  Self-assessment       1.01               0.01       0.86             0.39      0.39
  Quality of feedback   1.06               0.06       0.09             0.70      0.49
  Faculty assessment    0.79               –0.02      0.01             –3.23     <0.01

  *Adjusted R² = 0.10.

Follow-up on Learning Goals and Actions Taken

Following a period of 2 to 4 weeks, subjects were asked which learning goals from the case they could recall, if any, and what actions they had taken to improve performance. The 72 residents initially generated a total of 226 learning goals. At 2 to 4 weeks, 62 of the 72 residents responded to the follow-up questionnaire, and this group recalled a total of 89 learning goals (mean = 1.4 learning goals per resident). Of those, 58% (52 of 89) were reportedly acted upon (mean = 0.8 learning goals per resident). The sources of learning goals recalled and acted upon are summarized in Table 4. Although the origins of the immediate learning goals were heavily weighted toward self-assessments, there was a shift toward feedback in the goals recalled and acted upon at follow-up. Feedback that agreed with self-assessments led to the greatest number of actions taken to improve future performance.

Table 4: Sources of Learning Goals Recalled and Acted Upon on Follow-up Questionnaires

  Goal Category                                    Goals Recalled   Residents Recalling Goals   Goals Executed   Residents Executing Goals
  N for column                                     89               62                          52               62
  Self-assessment only                             30 (34)          23 (37)                     13 (25)          13 (21)
  Feedback only                                    22 (25)          17 (27)                     13 (25)          12 (19)
  Self-assessment and feedback, in agreement       31 (35)          23 (37)                     21 (40)          16 (26)
  Feedback, in disagreement with self-assessment   3 (3)            3 (5)                       3 (6)            3 (5)
  Self-assessment, in disagreement with feedback   3 (3)            2 (3)                       2 (4)            2 (3)
  Total associated with self-assessment            64 (72)          43 (69)                     36 (69)          30 (48)
  Total associated with feedback                   56 (63)          38 (61)                     37 (71)          27 (44)

  Data are reported as n (%).

DISCUSSION

We conducted a multicenter, observational, cross-sectional study of EM residents taking an oral board examination, performing a structured self-assessment, and receiving feedback from evaluators. We were surprised to find that the initial learning goals generated by the residents were more strongly influenced by their own self-assessments than by faculty feedback. The follow-up actions taken, on the other hand, more often integrated faculty feedback, as long as it agreed with the residents' self-assessments.
One's self-assessment for a given task is influenced by multiple factors, including prior experience, confidence, and the context of the activity.7,15 Studies have shown that physicians' self-assessments share very little association with external measures of objective performance.1,12,16 Our findings agreed: the association between faculty scores and resident self-assessment scores was very weak. This suggests that residents whom faculty rated poorly tended to overestimate their performance, while those rated highly tended to underestimate theirs. Given the inaccuracy of self-assessments, some authors have questioned the use of self-assessment tools in medical education and their value for performance improvement and patient care.15 On the contrary, we found that self-assessments are integral to residents' goals and plans to improve, and thus have a value that deserves greater recognition.

Our study aimed to separate self-assessments from faculty feedback to determine the roles each plays, and how they interact, in the development of learning goals and in the reported execution of those goals. We found that the vast majority of learning goals were generated from the residents' own self-assessments, while fewer than half of the initial learning goals incorporated faculty feedback. On subsequent follow-up, however, the actions taken to improve showed that faculty feedback had a greater influence than initially measured. The learning goals reportedly executed showed an equal influence of self-assessment and feedback, and the strongest stimulus came from agreement between the two. Despite the known inaccuracy of self-assessments in physician evaluation, it is clear that we must at least consider these self-assessments when providing feedback if we hope to influence performance. Our results support the findings of prior studies, which have shown that the feedback evaluators provide is always interpreted through the "lens" of the residents' self-assessments.15

Our results indicate that the learning goals created by residents are based more on their self-assessments, while their actual behaviors and actions integrate external feedback, as long as it agrees with their self-assessments.
It is possible that self-assessments play a greater role in motivating self-directed learning, such as the generation of learning goals, while feedback plays a greater role in changing actual behaviors. Although not conclusive, our results are hypothesis-generating and might lead to subsequent studies in which a similar protocol is used with detailed recording of actual actions taken and repeat testing to objectively assess for improvement.

In our analysis of which residents incorporated feedback into their learning goals, we found no significant relationship with feedback quality ratings or with self-assessment scores. We did find an inverse relationship between faculty scoring and feedback integration, indicating that higher performers were less likely to use feedback from evaluators. The etiology of this is unknown. It could be explained by the underlying self-confidence of high performers or, conversely, by a greater interest in feedback on the part of lower performers. Alternatively, the specific feedback provided to higher performers may have been less actionable and perceived as less relevant, while lower performers may have received feedback that was more critical to management. This is an area that requires further research.

Given these findings, educators might consider integrating more self-assessment into the various training modalities of the residency curriculum. We found that agreement between self-assessments and feedback led to the most actions reportedly taken to improve. Perhaps once the learner's self-assessment is communicated to the evaluator, the feedback can be modified to make acceptance and integration more likely. Feedback could be carefully molded into the framework of the learner's own conclusions: positive points might be reinforced, while constructive feedback could be focused on the specific task to preserve the learner's self-esteem. Feedback integration is ripe for further research and exploration in many areas of resident education and even clinical care.

LIMITATIONS

We chose an oral board examination as the tool for measurement of the variables of interest; this may not be generalizable to other contexts. It is possible that EM residents had difficulty generating SMART learning goals despite being provided with a clear framework and written examples. However, one of the new EM practice-based learning and improvement milestones specifically requires residents to "implement learning plans."17 EM residents may require training in the development of learning goals and learning plans.

Regarding our data analysis, the categorization of residents' learning goals according to source was somewhat subjective. We minimized this by using clear definitions of the categories and by ensuring that a group of at least three investigators simultaneously reviewed the self-assessments, feedback checklists, and learning goals and reached consensus in all cases. For each site's data analysis, the investigator who had served as the examiner participated in the categorization by conference call; this may have introduced some bias, but it was at times necessary for interpreting what the resident wrote on the form and placing the responses in context, given the investigator's knowledge of the encounter. For the follow-up data, 62 of the original 72 subjects completed the questionnaire, with 10 not participating, mostly due to off-service or away rotations; this may have introduced bias.
The actions taken were self-reported by the residents, and there was no mechanism in this study protocol to verify the completion of those actions. A future study that retests subjects with another oral board case could assess performance improvement more objectively. Last, the multicenter nature of this study, while strengthening the validity of our findings, introduced a potential for variability between examiners in the style of examination administration and the delivery of feedback. This variability was minimized by the group's creation of a structured feedback checklist.

CONCLUSIONS

As teachers in graduate medical education, we too often focus on the quality and delivery of feedback. A growing body of literature demonstrates the need to focus more on the receiving end: the learners' reception and integration of our feedback. While this study found that the majority of initial learning goals stemmed from the residents' own self-assessments, most of the actions taken to improve after a follow-up period came from feedback and self-assessment in agreement. In addition, higher performers were less likely than lower performers to use evaluator feedback. These findings support the evidence that we, as educators, need to understand how residents assess their own performance, so that feedback can be modified and delivered in a way that is interpretable in the context of the residents' self-assessments. Although these self-assessments may sometimes be inaccurate, they lay the foundation upon which to deliver effective feedback, and alignment of the two perspectives demonstrates the greatest effect in motivating actions to improve performance.

The authors acknowledge Jeff Love, MD, for leadership of the MERC program and ongoing mentorship; Peter Shearer, MD, and Christopher McDowell, MD, for assistance in concept development and protocol design; and Barbara Davis, RN, for data management support.

References

1. Shute VJ. Focus on formative feedback. Rev Educ Res. 2008; 78:153–89.
2. Archer JC. State of the science in health professional education: effective feedback. Med Educ. 2010; 44:101–8.
3. Gigante J, Dell M, Sharkey A. Getting beyond "good job": how to give effective feedback. Pediatrics. 2011; 127:205–7.
4. Veloski J, Boex JR, Grasberger MJ, Evans A, Wolfson DB. Systematic review of the literature on assessment, feedback and physicians' clinical performance: BEME Guide No. 7. Med Teach. 2006; 28:117–28.
5. Yarris LM, Fu R, LaMantia J, et al. Effect of an educational intervention on faculty and resident satisfaction with real-time feedback in the emergency department. Acad Emerg Med. 2011; 18:504–12.
6. Chang A, Chou CL, Teherani A, Hauer KE. Clinical skills-related learning goals of senior medical students after performance feedback. Med Educ. 2011; 45:878–85.
7. Eva KW, Munoz J, Hanson MD, Walsh A, Wakefield J. Which factors, personal or external, most influence students' generation of learning goals? Acad Med. 2010; 85(10 Suppl):S102–5.
8. Grant H, Dweck CS. Clarifying achievement goals and their impact. J Pers Soc Psychol. 2003; 85:541–53.
9. O'Neill J, Cozemius A. The Power of SMART Goals: Using Goals to Improve Student Learning. Bloomington, IN: Solution Tree, 2005:13–26.
10. Sargeant J, Mann K, van der Vleuten C, Metsemakers J. "Directed" self-assessment: practice and feedback within a social context. J Contin Educ Health Prof. 2008; 28:47–54.
11. Sargeant J, Armson H, Chesluk B, et al. The processes and dimensions of informed self-assessment. Acad Med. 2010; 85:1212–20.
12. Davis DA, Mazmanian PE, Fordis M, Harrison RV, Thorpe KE, Perrier L. Accuracy of physician self-assessment compared with observed measures of competence. JAMA. 2006; 296:1094–102.
13. Hinfey P, Bohm M. Ventricular Fibrillation Cardiac Arrest. CORD SharePoint Site. Available at: http://cord.sharepointsite.net/default.aspx. Accessed August 18, 2011 (access by members only).
14. Gelman A, Hill J. Data Analysis Using Regression and Multilevel/Hierarchical Models. New York, NY: Cambridge University Press, 2007.
15. Eva KW, Armson H, Holmboe E, et al. Factors influencing the responsiveness to feedback: on the interplay between fear, confidence, and reasoning processes. Adv Health Sci Educ Theory Pract. 2012; 17:15–26.
16. Eva KW, Regehr G. "I'll never play professional football" and other fallacies of self-assessment. J Contin Educ Health Prof. 2008; 28:14–19.
17. Accreditation Council for Graduate Medical Education, American Board of Emergency Medicine. The Emergency Medicine Milestone Project. Available at: http://www.acgme.org/acgmeweb/. Accessed July 31, 2013.

Supporting Information

The following supporting information is available in the online version of this paper:

Data Supplement S1. Ventricular fibrillation cardiac arrest.
Data Supplement S2. Post-test evaluator assessment.
Data Supplement S3. Post-test self-assessment.
Data Supplement S4. Learning goals.
Data Supplement S5. Feedback rating form.