key: cord-1025952-koxbwoj1 authors: Lee, Ryan K.; Cohen, Micah; David, Neena; Matalon, Terence title: Transitioning to Peer Learning: Lessons Learned date: 2020-10-20 journal: J Am Coll Radiol DOI: 10.1016/j.jacr.2020.09.058 sha: 23e26daa157b37032ac481dfdfcdbfeb073e7fcf doc_id: 1025952 cord_uid: koxbwoj1

PURPOSE: To describe the transition from a traditional peer review process to a peer learning system, as well as the issues that arose and the subsequent actions taken.

METHODS: Baseline peer review data were obtained over 1 year from our traditional peer review system and compared with data obtained over 1 year of peer learning. Data included the number of discrepancies and the breakdown of discrepancy types. Staff radiologists were surveyed to assess their perception of the transition.

RESULTS: Five significant discrepancies were submitted under the traditional peer review system, compared with 416 cases submitted under the new peer learning methodology. The most frequently reported peer learning events were perception errors (45.0%) and great calls (35.1%). Surveys administered after the intervention period demonstrated that most radiologists felt peer learning contributed more to their professional development and offered more opportunities for learning than the traditional peer review system.

CONCLUSION: The benefits of instituting peer learning include increased radiologist engagement and education. There may be challenges in the transition from a traditional peer review system to peer learning; however, the process of solving these issues can also result in an overall improved system.

Peer review has always been a challenging proposition in radiology, with methodologies used in other specialties often not directly applicable to our field. In 2002, a patient safety task force of the ACR introduced RADPEER to allow radiologists to provide quality assessment and routine peer review [1]. This now familiar form of peer review involves a score-based, retrospective assessment of prior interpretations of imaging studies. Although this traditional peer review process has been widely used to satisfy credentialing bodies such as the Joint Commission, its ability to produce meaningful learning opportunities has been questioned [2]. Furthermore, the traditional peer review process focuses on the mistakes of an individual rather than on shared learning opportunities for the entire team.

In 2015, the Institute of Medicine (now the National Academy of Medicine) published Improving Diagnosis in Health Care, which emphasized the importance of applying methods to identify and learn from diagnostic errors, with a fundamental shift in focus from blame to education [3]. The newer concept of peer learning described by Larson et al [4] introduced a process of peer review that follows the tenets outlined by the National Academy of Medicine in identifying diagnostic errors, with an emphasis on education and improvement. Since its introduction, peer learning has demonstrated improved radiologist satisfaction and educational value, particularly when compared with the traditional peer review process [4-6].

Because of the significant differences from traditional peer review, the transition to peer learning requires substantial changes in mindset, workflow, and logistics. These factors presented challenges that threatened to disrupt its implementation. The purpose of this article is to describe our transition to peer learning, the issues we encountered, and the adjustments that were subsequently made.
This HIPAA-compliant project obtained a waiver from our institutional review board. Peer review data were collected on imaging studies from a network that included an academic tertiary care center, two community hospitals, and multiple outpatient sites. Each peer review was placed electronically in Conserus software (Change Healthcare, Nashville, Tennessee) built into the PACS (Synapse, Fuji Medical Systems USA, Stamford, Connecticut). The baseline peer review data included reviews from January 1, 2018, to December 31, 2018. The intervention period included the peer learning review data from January 1, 2019, to December 31, 2019. After the conclusion of the intervention period, surveys were sent to participating radiologists to assess their opinions of the peer learning system compared with the baseline peer review system.

The existing departmental peer review process (Fig. 1) required each radiologist to randomly select cases to review on a biannual basis. The rating system used was based on the RADPEER (ACR, Reston, Virginia) grading system:

I. Agree with read
II. Discrepancy in interpretation, not ordinarily expected to be made
  a. Not clinically significant
  b. Clinically significant
III. Discrepancy in interpretation, should be made most of the time
  a. Not clinically significant
  b. Clinically significant

At the end of each biannual review period, a report was generated that included all peer-reviewed cases given a score of IIb or higher, and these cases were then reviewed by the Peer Review Committee (PRC). The PRC comprised the chair of the PRC, the chair of radiology, and the vice chair of quality and safety. A case was considered a discrepancy only if the PRC agreed, and only then was a notification sent back to the original reading radiologist.

The shift to peer learning resulted in significant technical and logistical changes from the baseline peer review system. Changes to the PACS interface included the development of a new classification scale based on peer learning concepts, which allowed more granular information to be captured for each peer review. In addition, the cases from the new peer learning system served as the basis for learning conferences, including the periodic morbidity and mortality (M&M) meetings, a change from the previous ad hoc system of obtaining cases for M&M. By directly pairing the peer learning system with M&M conferences, a formalized method of obtaining cases to review with the entire department, including residents, was established. Furthermore, strict anonymization of each case presented was enforced to emphasize the educational aspect of these conferences and to prevent shaming of involved radiologists.

However, the most important difference in making the shift to peer learning was conceptual. The central theme in the adoption of peer learning is embedded in its name: learning. The most important purpose of identifying discrepancies under the peer learning format is to improve the practice of radiology through education. This is a critical distinction from our baseline peer review system, in which the focus was primarily compliance, with education a decidedly secondary objective.

Early Implementation of Peer Learning. The implementation of peer learning resulted in a new rating system, with discontinuation of the system that was based on RADPEER (Fig. 2).
New peer learning categories (perception, cognition, reporting, communication, and great call) were created based on a similar system described by Larson et al [4] and incorporated into the Conserus software embedded in the PACS. When a learning opportunity was selected, an additional box prompted the radiologist to choose either "clinically significant" or "not clinically significant." The category "great call" was created to capture exceptional identification of subtle findings or instances of impressive cognition, acknowledging that learning opportunities do not always derive from error. Other recorded data for each case included the reviewer and reviewee radiologists involved and a brief description of the teaching point. Although a minimum of two reviews per month was required, radiologists were encouraged to place a review whenever they saw a discrepancy, to promote the importance of education in this process. Given the changes in the peer learning process and the anticipation of unforeseen issues with the new workflow, a relatively low number of required reviews was selected to minimize radiologist anxiety during the transition. To give peer learning the best chance of success, it was felt initially that not overburdening the radiologists with an untested workflow was paramount.

Similar to the workflow for the baseline peer review process, the reviewed cases were collected, and those labeled "clinically significant" were sent to the PRC to be vetted. Given the priority of education under the peer learning system, the PRC was expanded to include at least one representative from each subspecialty to improve validation of cases. The PRC's responsibilities were also expanded: in addition to vetting the submitted peer-reviewed cases, it was tasked with selecting cases to be formally presented at M&M conference. Furthermore, these M&M conferences were held quarterly and represented cases from all subspecialties, replacing the previous system of semiannual M&M conferences centered on a specific subspecialty. The change from focusing on one subspecialty to including all subspecialties in each conference was made initially because it was felt that reviewing M&M cases from only one subspecialty resulted in long gaps between presentations of other subspecialties' cases. Although this was mitigated to some degree by the increased frequency of conferences under the new approach, the overall feedback was that presenting a cross section of different types of cases within each conference resulted in a more rounded educational experience, and so this format was ultimately retained. Additionally, it prevented a potentially long latency period before a case with more serious repercussions was discussed.

Current Implementation of Peer Learning. The current iteration of our peer learning implementation involved primarily changes in workflow and logistics (Fig. 3).
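To make the captured data elements concrete, the following is a minimal sketch of how a peer learning event might be represented in software under this iteration of the workflow. All names, fields, and the helper function are hypothetical illustrations and do not reflect the actual Conserus or PACS implementation.

from dataclasses import dataclass
from enum import Enum
from typing import List


class EventCategory(Enum):
    """Peer learning categories (see Table 1)."""
    PERCEPTION = "perception"
    COGNITION = "cognition"
    REPORTING = "reporting"
    COMMUNICATION = "communication"
    GREAT_CALL = "great call"


@dataclass
class PeerLearningEvent:
    """One peer learning review placed by a radiologist.

    Clinical significance is intentionally not recorded here; under the
    current workflow that determination is made downstream by the PRC.
    """
    accession_number: str  # hypothetical identifier for the reviewed examination
    subspecialty: str      # used to route the case for section-level vetting
    category: EventCategory
    reviewer: str          # radiologist placing the review
    reviewee: str          # radiologist who rendered the original interpretation
    teaching_point: str    # brief description of the learning opportunity


def section_queue(events: List[PeerLearningEvent], subspecialty: str) -> List[PeerLearningEvent]:
    """Collect one subspecialty's events so its section can vet them before the PRC."""
    return [e for e in events if e.subspecialty == subspecialty]

Omitting clinical significance from the record mirrors the decision, described below, to let the PRC rather than the reviewing radiologist make that determination.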
Although the peer learning categories were retained without changes, the subcategories "clinically significant" and "not clinically significant" were eliminated because it was felt that this determination should be made by the PRC; removing them also simplified the workflow for radiologists. The other major change involved how the peer-reviewed cases submitted by the radiologists were vetted. Peer learning cases were sorted by subspecialty, and each section head was tasked with reviewing those cases with the radiologists in that section to decide which should be submitted to the PRC. The vetting process was thus performed primarily at the subspecialty level, with the PRC acting as the body overseeing this process. Because a radiologist from each subspecialty remained on the PRC, any noteworthy cases or issues uncovered during the subspecialty reviews could be brought up and discussed during committee meetings.

The total number of reported significant peer learning events increased with the new peer learning process compared with the baseline peer review system. In a 1-year period, only five significant discrepancies (defined as IIb or higher) were identified with the traditional peer review system. In contrast, 416 peer learning events were submitted with the new peer learning system, consisting of 45.0% perception errors, 35.1% great calls, 8.4% cognition errors, 6.7% reporting errors, and 0.5% communication errors (Table 1). This represented an average of 16.64 peer learning events per radiologist in the intervention period.

All 22 participating diagnostic radiologists responded to the survey (Fig. 4). Of these, 16 (73%) agreed or strongly agreed with the statement that the peer learning system contributed more to their development as radiologists than the traditional peer review system; 5 were neutral, and 1 disagreed. Nine (41%) agreed or strongly agreed with the statement that the baseline peer review process was easier to use than the new peer learning process; 7 were neutral, 5 disagreed, and 1 strongly disagreed. Seventeen (77%) agreed or strongly agreed with the statement that peer learning provided more opportunity for learning than the traditional peer review system; 5 were neutral.

The adoption of the peer learning methodology resulted in dramatically increased engagement and participation by radiologists, as evidenced by the 416 peer learning events in the intervention period (Table 1) compared with only 5 significant reviews in the baseline peer review period. By generating many opportunities for learning, the peer learning system dramatically increased the number of interesting cases presented at M&M conferences, in turn requiring the frequency of M&M conferences to double under the new system.
These new M&M conferences were extremely well received, with each conference including a cross section of interesting cases from each subspecialty that generated stimulating discussions. This is likely the basis for the 77% of radiologists who felt the new peer learning process provided more educational opportunities than the baseline peer review system (Fig. 4). The typical M&M conference reviewed 8 to 10 cases per subspecialty and frequently ran over its allotted 1-hour slot. Nevertheless, despite the success of peer learning in improving radiologist engagement and creating an educational environment, the transition to peer learning presented significant challenges with physician acceptance, compliance, informatics, and workflow.

The survey results demonstrate that staff radiologists felt the peer learning system was superior from an educational perspective to our baseline peer review process. Despite this, introducing the new system did raise some concerns. The new peer learning grading system initially drew complaints from several radiologists accustomed to the traditional grading system. Sweeping changes introduced into a long-standing workflow can trigger such reactions, so this was not unexpected. Furthermore, the software interface was more complex than the baseline peer review interface. This is reflected in the significant proportion of radiologists (41%) who found placing peer learning reviews more complicated than under the baseline peer review system (Fig. 4). This was in part due to the greater granularity of information being recorded. For example, a new check box was added to indicate whether the case was a trauma case. Another field was created to identify whether the peer learning event was placed on a staff radiologist or a radiology resident.

Over time, this interface was gradually simplified and streamlined. Fields in the interface were scrutinized to determine whether the information was truly needed or could be obtained elsewhere. For example, the field indicating whether the review was on a staff radiologist or a resident was eliminated because this information could be obtained elsewhere in the interface. As described previously, the box requiring an indication of either "clinically significant" or "not clinically significant" was removed. Together, these small changes resulted in a smoother and more refined interface.

A more serious concern from several radiologists was how the information from the peer learning process would be used. The motivation behind peer learning is to improve the educational worth of the peer review process, and to this end the switch was enthusiastically supported by the radiologists. At the same time, however, there was anxiety that these data could also be used to assess radiologist performance. In theory, the baseline peer review process was designed for the purpose of assessing radiologist performance, as error rates for significant discrepancies were routinely generated.
In reality, the small number of significant discrepancies generated demonstrated that this process served more to document compliance, and its utility in assessing true performance was poor. Because the new peer learning system generated significantly more discrepancies, there was angst that the data could be used more robustly for performance review. This fear was mitigated by the department chair emphasizing that the new peer learning process was meant to improve the educational value of peer review and would not be used to generate error rates for performance review. Ultimately, the buy-in of senior leadership was critical to the acceptance of peer learning as a legitimate educational tool. This emphasis on education as opposed to performance review assuaged radiologist apprehension and paved the way for the engagement that ultimately resulted. The expansion of the PRC to include radiologists from each subspecialty, as well as the expanded role of radiologists in their own subsections, increased transparency, underscored the educational purpose of the process, and contributed to radiologist acceptance.

Our baseline peer review system satisfied the requirements of credentialing bodies such as the Joint Commission for ongoing professional practice evaluation (OPPE), which is one of the reasons traditional peer review is so widely used among radiology departments. Calculating error rates using the numerical grading system fulfills, on the surface, a quantitative method of assessing radiologist performance but in reality is of questionable value. The peer learning methodology, by emphasizing education and encouraging placement of a review whenever a discrepancy is found, is by definition a nonrandom process, and as such its data cannot be used in the same fashion as data obtained from our baseline peer review process. Furthermore, it was decided that peer learning data would not be used in direct performance reviews of radiologists. These factors together seem, at first glance, problematic for satisfying Joint Commission OPPE requirements with the peer learning format. However, upon closer inspection of the Joint Commission documents [7], we believe that it is in fact possible to satisfy OPPE requirements using the peer learning methodology.

The Joint Commission describes an outline for designing an OPPE process but leaves considerable latitude in the details of implementation. In our implementation of the peer learning system, the responsibilities for data review and the frequency of review required by the Joint Commission are satisfied by the radiologist reviews and the workflow through the PRC. The Joint Commission further subdivides the types of data that can be collected into qualitative and quantitative types. Our peer learning process satisfies the qualitative assessment by documenting participation in the process, including documentation of the minimum number of reviews required each month as well as attendance at M&M conferences. One weakness of the peer learning methodology is the lack of quantitative data that our baseline peer review system previously incorporated.
However, it is notable that the Joint Commission allows the use of either qualitative or quantitative data (or both) to satisfy OPPE requirements, provided the approach satisfies the appropriate committees at the institution [7]. The peer learning process we have implemented satisfies the qualitative data requirements and has the acceptance of the PRC and department leadership, and thus fulfills Joint Commission requirements. If desired, separate quantitative data can also be collected, such as final report turnaround time for each radiologist. Nevertheless, each department should review its own process to ensure that it complies with regulatory and institutional requirements.

A significant limiting factor in implementing the peer learning methodology in our department was the availability of IT support, which was exacerbated in the later phase of implementation by the coronavirus disease 2019 pandemic. The initial implementation of peer learning required an increased number of mouse clicks compared with the baseline peer review system. Experimenting with different iterations of the software interface to improve efficiency required significant back-end coding changes by an already overworked IT team and often resulted in long waits before modifications were incorporated. This was further complicated by other review workflows in the system, including those for radiology resident discrepancies. Although we have made significant strides in streamlining these different workflows, much remains to be done to improve integration and optimize the efficiency of the peer review process.

Generating feedback to the reviewer and reviewee has been an unexpected issue with our peer learning system. With our baseline peer review system, feedback to the original reading radiologist was not an issue: significant discrepancies were placed so infrequently that notification could be given manually. With the new peer learning workflow resulting in many reviews being placed, this manual closure of the feedback loop was no longer practical. As a result, the IT team is working on a workflow that will automatically send both the reviewer and the reviewee a notification once a review has been vetted.

Another weakness in our implementation of the peer learning methodology is the lack of anonymization of the reviewer, reviewee, and patient. Because of how a review is placed in the system, the reviewer can discern the identity of the reviewee and vice versa. This in turn can influence whether a review is placed, although this is mitigated by making education, not compliance, the focus of the process. One future goal is to implement better anonymization between reviewer and reviewee while keeping this information available to the PRC. As an aside, the absence of anonymization in our review process should not invalidate the protection of these peer reviews from discovery, such as is granted by the Peer Review Protection Act (PRPA) in Pennsylvania [8]. Although a discussion of what is protected under this legislation is beyond the scope of this article, because these reviews are generated by and for physicians and reviewed by a hospital committee for the purpose of improving quality, our process follows the legislative intent for protection under the PRPA.
Nevertheless, recent controversial decisions by the Pennsylvania Superior Court have potentially narrowed the scope of protection of the PRPA [8], and it behooves each department to review its peer review process with its legal department.

The success of our implementation of peer learning in improving radiologist engagement was evident from the increase in significant peer reviews compared with our baseline peer review process. Despite this dramatic increase, the average number of events placed per radiologist, 16.64, was below the originally set threshold of 24 (2 reviews per month). This was attributed primarily to the various changes in the workflow as it was being optimized, which undoubtedly at times caused confusion and frustration in placing reviews and, in turn, resulted in fewer reviews being placed. As the workflow continues to improve, we expect the number of peer reviews to increase.

Nevertheless, during the initial iteration of our peer learning workflow, the increase in reviews overwhelmed the PRC. At one point, the committee had to meet weekly to vet the reviews in a timely fashion, leaving little time to address other committee responsibilities. This issue was addressed by modifying the workflow as described previously, so that each subspecialty reviewed cases in its discipline and subsequently forwarded its findings to the PRC. In addition to improving the workflow of the PRC, this modification had the benefit of increasing radiologist engagement, because all radiologists would, in theory, be part of a group review process within their section. In practice, the complexities of scheduling make it challenging for all radiologists in a section to be available at the same time, and there is still opportunity to improve this process.

In summary, the transition from our baseline peer review process to peer learning presented many different challenges. Issues arose from physician anxiety, regulatory concerns, informatics difficulties, and workflow struggles. Each of these issues threatened to derail and possibly abort the implementation of peer learning. We tackled these challenges by accounting for the viewpoints of each stakeholder: radiologists (as both reviewers and reviewees), department leadership, the informatics team, and regulators. Considering these varied viewpoints was key in our initial transition to the peer learning system as well as in its subsequent optimization to its current form. As is the case for any quality project, the current iteration is not meant to be final but is part of a process that will continue to evolve as we continue to learn. Although there are still issues to resolve, we consider our implementation of peer learning to date a success because it has increased radiologist engagement, improved education, and streamlined workflow.

- Challenges can arise when transitioning from a traditional peer review process to peer learning; however, these challenges can also represent opportunities for improvement.
- Challenges in transitioning to peer learning include physician anxiety, regulatory concerns, informatics difficulties, and workflow struggles.
- Solutions must involve consideration of the radiologist (as both reviewer and reviewee), department leadership, the informatics team, and regulators.
- Peer learning increased radiologist engagement and improved education.

1. RADPEER scoring white paper.
2. Yield of learning opportunities from a radiology random peer review program.
3. Board on Health Care Services, Institute of Medicine. Improving diagnosis in health care.
4. Peer feedback, learning, and improvement: answering the call of the Institute of Medicine report on diagnostic error.
5. Implementation of a peer learning program replacing score-based peer review in a multispecialty integrated practice.
6. Improving radiology peer learning: comparing a novel electronic peer learning tool and a traditional score-based peer review system.
7. Ongoing professional practice evaluation (OPPE)-understanding the requirements.
8. The Peer Review Protection Act ("PRPA"): looking back, looking ahead. The Pennsylvania Bar Association Quarterly.