authors: Bass, Michael; Oncken, Christian; McIntyre, Allison W.; Dasilva, Chris; Spuhl, Joshua; Rothrock, Nan E.
title: Implementing an Application Programming Interface for PROMIS Measures at Three Medical Centers
date: 2021-10-20
journal: Appl Clin Inform
DOI: 10.1055/s-0041-1736464

Background: There is an increasing body of literature advocating for the collection of patient-reported outcomes (PROs) in clinical care. Unfortunately, there are many barriers to integrating PRO measures, particularly computer adaptive tests (CATs), within electronic health records (EHRs), thereby limiting access to advances in PRO measures in clinical care settings.
Objective: To address this obstacle, we created and evaluated a software integration of an Application Programming Interface (API) service for administering and scoring Patient-Reported Outcomes Measurement Information System (PROMIS) measures with the EHR system.
Methods: We created a RESTful API and evaluated the technical feasibility and impact on clinical workflow at three academic medical centers.
Results: Collaborative teams (i.e., clinical, information technology [IT], and administrative staff) performed these integration efforts, addressing issues such as software integration and impact on clinical workflow. All centers considered their implementation successful based on the high rate of completed PROMIS assessments (between January 2016 and January 2021) and minimal workflow disruptions.
Conclusion: These case studies demonstrate not only the feasibility of but also the pathway for the integration of PROMIS CATs into the EHR and routine clinical care. All sites utilized diverse teams with support and commitment from institutional leadership, initial implementation in a single clinic, a process for monitoring and optimization, and use of custom software to minimize staff burden and error.

Electronic health record systems (EHRs) were designed exclusively for clinician-entered data, thus relegating patient-perspective data (e.g., fatigue self-report) to be entered as ambiguous, non-standardized clinical notes. 1 As clinical care aims to support symptom management, improve patients' quality of life, enhance patient-clinician communication, 2 facilitate shared decision-making, and screen for distress, 3, 4 patient-reported outcomes (PROs) are considered the "gold standard" approach to capturing the patient perspective. 5 Although some early adopters have used electronic systems for clinical PRO data collection, for a variety of reasons the collection of PROs has remained outside of EHRs, thus limiting PROs' integration into routine clinical care. 6, 7

One set of PROs, the Patient-Reported Outcomes Measurement Information System (PROMIS) 8 measures, offers advantages when collected in clinical care. First, the measures are appropriate for use in patients with a wide range of chronic diseases and demographic characteristics. This enables individuals with multiple conditions to complete fewer PROs and clinicians to compare scores across patients. Second, PROMIS measures use T-scores, a standardized score (mean = 50, SD = 10) centered on a reference population, thus making them more interpretable than traditional PRO measures. For most PROMIS measures, T = 50 is the mean for the U.S. general population. Third, in addition to fixed-length measures, PROMIS measures include computer adaptive tests (CATs).
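Before turning to how CATs work, it may help to note that the T-score metric mentioned above is a standard linear rescaling of an estimated latent score; the formula below is the conventional transformation, supplied for illustration rather than quoted from the article. If \( \theta \) denotes the estimated latent score standardized to the reference population (mean 0, SD 1), then

\[ T = 50 + 10\,\theta , \]

so a patient one standard deviation above the reference-population mean receives T = 60, and a patient one standard deviation below receives T = 40.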
CATs present questions using an item response theory (IRT)-based algorithm that dynamically selects questions based on the answers previously provided. 9 This model has the distinct advantage of reducing patient burden and lessening the likelihood of respondent fatigue because the assessment finishes once the algorithm estimates a score within a specified standard error. CAT algorithms are Bayesian in nature and are not based on the conditional branching logic that is utilized in many data collection systems. Instead, CAT item selection and scoring use the statistical properties of items and response options. Very few survey systems implement these algorithms, and even fewer systems are developed with interoperability in mind.

To date, only one EHR vendor has fully integrated the PROMIS CAT algorithm into their system. Even then, not all PROMIS CATs are available, and organizations must request that the EHR vendor load CATs into the system for them. This process takes time and is not under the control of the organization requesting the measure(s). Additionally, organizations are constrained by the features available through the EHR vendor, such as limited visualization options. 10, 11 Although integration of CAT administration and scoring algorithms within the EHR is theoretically possible, large EHR vendors have competing priorities when considering new features and scheduling releases. To integrate PROMIS CATs in EHRs, a cloud-based software as a service 12 could be considered; however, this would not be acceptable in most clinical settings because of HIPAA privacy and security requirements. Protected health information would be collected outside the clinical system's administrative, physical, and technical safeguards and then transmitted through their firewall to their EHR, thus requiring the establishment of business associate agreements. A lightweight REpresentational State Transfer (REST) 13 Application Programming Interface (API)-based component would overcome these technical and business obstacles because the API could be hosted within the firewall of the clinical organization and communicate directly with the EHR system. We developed the Assessment Center API (Assessment Center-API) as a RESTful web service to make PROMIS CAT administration and scoring accessible in other software applications without the need for significant technical expertise and effort, and we evaluated its implementation at three academic medical centers.

We created the Assessment Center-API 14 with four distinct functions to optimize PRO data collection in a clinical workflow. 15, 16 The Assessment Center-API (1) lists all available PRO measures. Clinicians can select appropriate measures to administer to their patient population, and analysts can map PROMIS measures to questionnaires within the EHR. This mapping enables clinicians to place orders and view results directly within the EHR system. To support this, the Assessment Center-API (2) generates an assessment based on the clinical order placed. This can be done by defined variables within the EHR (e.g., diagnostic code) or on an ad hoc basis by the clinician. After an order is placed, an assessment is created in the Assessment Center-API. Then, the collection platform can (3) administer the assessment to the patient at home or in clinic on a tablet at the point of care. For CATs, the API returns the most informative item, based on an IRT algorithm, and the patient responds. This process of requesting and responding to items continues until the IRT algorithm determines that enough information has been obtained and a sufficiently precise score estimate can be calculated. At this point, the assessment is finished and the Assessment Center-API (4) produces the assessment results and sends them to the EHR to be viewed by care providers.

All functions are implemented as RESTful endpoints that use basic authentication. The system is anonymous in that no patient identifiers are used during the assessment; the system generates a globally unique token for each assessment, and the token is used as the key for in-memory state management on the server. No data are written to disk during the assessment; only results are stored in a database with the associated unique token. The calling system is responsible for any linking with clinical data during visual presentations to clinicians and patients.
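To make this request-and-response cycle concrete, the following is a minimal client sketch for a RESTful CAT service of this kind. The endpoint paths, JSON field names, and credentials are assumptions made for illustration and are not the published Assessment Center-API contract; only the overall pattern mirrors the workflow described above: basic authentication, an opaque assessment token, one item per round trip until the engine reports that its score estimate is sufficiently precise, then score retrieval.

# Minimal sketch (Python) of a client driving a hypothetical RESTful CAT service.
# Endpoint paths and field names are illustrative assumptions only.
import requests
from requests.auth import HTTPBasicAuth

BASE_URL = "https://promis-api.example.org"       # hosted inside the institution's firewall
AUTH = HTTPBasicAuth("registered-app", "secret")  # basic authentication, as described above

def run_cat(measure_id, answer_item):
    """Administer one CAT; answer_item(item) renders an item and returns the patient's response."""
    # (1) list available measures, e.g., for mapping to EHR questionnaires
    measures = requests.get(f"{BASE_URL}/measures", auth=AUTH).json()
    assert any(m["id"] == measure_id for m in measures)

    # (2) create an assessment; the service returns an opaque token (no patient identifiers)
    created = requests.post(f"{BASE_URL}/assessments", json={"measure": measure_id}, auth=AUTH)
    token = created.json()["token"]

    # (3) administer items one at a time until the IRT engine reaches a
    #     sufficiently precise score estimate and reports completion
    while True:
        item = requests.get(f"{BASE_URL}/assessments/{token}/next-item", auth=AUTH).json()
        if item.get("finished"):
            break
        value = answer_item(item)                 # e.g., collected on a tablet at the point of care
        requests.post(f"{BASE_URL}/assessments/{token}/responses",
                      json={"item_id": item["id"], "value": value}, auth=AUTH)

    # (4) retrieve the results; the calling system links them to the chart in the EHR
    return requests.get(f"{BASE_URL}/assessments/{token}/score", auth=AUTH).json()

In this pattern, the CAT engine owns item selection and scoring while the calling system owns presentation and record linkage, which is the division of responsibilities the article describes.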
We implemented the Assessment Center-API at a single clinic at three different academic medical centers. Each site assembled a collaborative team including clinical, information technology (IT), and administrative staff to integrate the Assessment Center-API with its EHR for a single clinic. Teams evaluated the technical feasibility and the impact on clinical workflow of the implementation.

The University of Rochester Medical Center (URMC) is an academic medical center with approximately 1,400 full-time faculty and over 2 million outpatient visits per year. URMC has a history of collecting PROs in research, and individual practices collected measures relevant to their medical specialty, largely with paper forms or REDCap. 17 PRO data was consequently inconsistent across disciplines and accessed only for retrospective analysis. The URMC Health Laboratory, a group focused on health care innovation, set out to redesign how PROs were collected and used clinically. The goal was to utilize PROs throughout a clinical episode to affect the course of treatment. Therefore, PROs had to be available for all patients and their providers at each clinical encounter. The Orthopaedic ambulatory clinic was selected for the pilot rollout because it is the largest and fastest-paced URMC clinic. Through a consensus process with care providers across the institution, it was agreed to utilize a shared assessment of PROMIS CATs for physical function, pain interference, and depression. These measures were selected as highly relevant in orthopaedic care as well as very relevant for most other disciplines. A shared set of measures also enabled study of global population health trends.

UR VOICE, a homegrown software platform used to collect PROs in clinical care, was developed with the Assessment Center-API as the backbone to enable rapid assessment via PROMIS CATs with measures that could be utilized across practices. UR VOICE was integrated into the EHR (Epic), and a workflow was devised to collect measures on iPads within clinics. When a patient arrived for a clinic visit, a barcode was generated within the EHR that was linked to the patient record. Staff scanned the barcode with a tablet and handed it to the patient. The tablet administered the assigned assessments, and the results were immediately available to view in the patient chart. To minimize clinician burden, care providers were not required to review results. Tablets were cleaned using CaviWipes and cloth towels after each use.
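As an illustration of this kind of barcode-mediated handoff, and of the anonymous-token model described earlier, the sketch below shows one plausible way a calling system could mint an opaque session token, keep the token-to-encounter mapping on the EHR side only, and place just the token in the payload that staff scan with the tablet. The function names, payload format, and storage are hypothetical; the article does not describe the internal implementation of UR VOICE or the other site systems, which may have linked assessments to patient records differently.

# Hedged sketch (Python): linking an assessment session to an encounter without
# putting patient identifiers on the tablet. Names and payload format are hypothetical.
import secrets

token_to_encounter = {}  # EHR-side lookup kept by the calling system, not the CAT service

def create_assessment_link(encounter_id, measure_ids):
    """Return a barcode/QR payload for a new assessment session."""
    token = secrets.token_urlsafe(16)            # unguessable, globally unique session token
    token_to_encounter[token] = encounter_id     # linkage stays on the EHR side
    # The payload carries only the token and the requested measures; the tablet uses it
    # to start the session against the CAT service.
    return "proassess://start?token={}&measures={}".format(token, ",".join(measure_ids))

def attach_results_to_chart(token, scores):
    """After completion, the calling system re-links scores to the encounter for chart display."""
    encounter_id = token_to_encounter.pop(token)
    return encounter_id, scores

Because staff scan a code rather than typing identifiers, this pattern is consistent with the sites' reported goal of eliminating transcription errors when initiating assessments.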
Washington University School of Medicine in St. Louis has a tripartite mission of patient care, research, and education. It is an institution with over 1,000 clinical faculty members serving the St. Louis region. PRO data was collected primarily for research purposes and captured via paper forms or REDCap. The vision for widespread collection of PROs was conceived by a department chair and enjoyed support from senior medical school leadership. Leadership committed financial and human resources by appointing an experienced business leader and an IT leader. WUPRO, a web-based, mobile-friendly system, was built around the Assessment Center-API to handle all aspects of administering PROs to patients. An iterative development approach, with daily meetings to discuss status, insights, and obstacles, produced a pilot minimum viable product 16 in clinics within 6 months of inception.

The University of Utah Health System is a tertiary care hospital with approximately 1.7 million patient visits per year. It serves Utah and residents of five surrounding states with multiple external and specialty-focused centers. PROs were collected on paper during clinic visits. An initial web-based PRO capture system was created to administer PROs via an iPad. The resulting data were displayed in a web-based report but not within the EHR. Although functional, this system did not support CATs. The introduction of the Assessment Center-API presented the opportunity to create mEVAL, an enhanced system that included a custom interface, custom functionality, access to PROMIS CATs, and integration with the EHR. The mEVAL pilot focused on spine, foot, and ankle Orthopaedic Center patients. The initial assessment included the PROMIS Physical Function CAT, comorbidity questions, and specialty-specific questionnaires for spine, foot, and ankle patients. All data were collected in the clinic waiting room prior to the appointment time and were administered via iPad. Getting the registration and clinic staff to consistently and reliably administer and complete the PROs was an unforeseen obstacle. Consequently, the system was modified in three ways to reduce staff burden. First, patients were alerted via email about scheduled assessments that could be completed prior to their appointment. Second, a QR code scanning process was added that decreased the time spent initializing the assessments. Third, a status was added for each assessment so that clinic staff could easily determine whether the patient had started or completed the assessment. This ensured that the assessment was not left in progress at the end of the visit. Overall, these modifications reduced staff burden, and completion rates grew to an average of 80%.

Within the orthopaedic clinic at URMC, an iPad assessment was administered at 97% of all patient visits. Patients completed all three CATs in 80% of initiated assessments. No major issues that disrupted the collection of PRO assessments occurred during the pilot. Due to high staff turnover, ongoing training was implemented to maintain a high frequency of assessment initiation. Through the course of the pilot, tablet management also required optimization. This included assigning devices to particular staff, adding tablet return bins in exam rooms and nursing stations, and offering styluses. Because the Assessment Center-API offered pediatric and Spanish versions, UR VOICE was expanded to include a more diverse sample of orthopaedic patients. The pilot was deemed successful, and additional funding was awarded to collect PROMIS measures in over 30 specialties.
Between January 2016 and January 2021, UR VOICE captured over 2.37 million assessments from over 868,000 patient encounters.

At Washington University School of Medicine in St. Louis, close consultation with clinic staff produced an interface designed to minimize user burden, requiring four clicks or fewer to initiate a patient assessment session. This assignment process produces a unique, single-use URL, which is then displayed as a QR code. The code is scanned with a clinic tablet and displays the assigned assessments in a standard web browser. The device is then handed to the patient. This entire process typically takes less than 15 seconds. A pilot study conducted in the orthopaedic surgery clinic found that assessment sessions consisting of 4 to 5 CATs averaged 30 to 60 seconds to complete and had a 99% completion rate. An Executive Oversight Committee was formed to guide implementation across the school, oversee funding and development, and provide guidance related to the implementation and capture of PRO data. Further development added features and flexibility, aiming to accommodate differing clinic workflows and requirements. A configurable rules engine integrated with scheduling lets clinics automatically tailor a selection of PRO assessments to a patient based on age and visit type. Optionally, clinics can manually assign specific bundles based on a patient's specific circumstances. The web-based nature of the system allowed us to respond to the challenges of the COVID-19 pandemic by letting patients complete assessments on their personal phones or tablets, or at home during a telehealth visit. Between January 2016 and January 2021, WUPRO captured over 2.95 million assessments from over 1 million patient encounters.

Following the success of the pilot at the University of Utah Health System, the chair of Orthopaedics appointed a faculty committee to create guidelines for collecting PROs within the department. These guidelines included requiring that the PRO scores be pulled into the EHR, mandating use of the PROMIS Physical Function CAT, and allowing for the addition of other instruments into the assessment by subspecialty. These guidelines were later financially incentivized based on overall subspecialty completion rates. Currently, work focuses on increasing provider and patient engagement and implementing additional system enhancements. Between January 2016 and January 2021, mEVAL captured over 1.48 million assessments from over 535,000 patient encounters.

CATs have not been natively supported in EHR systems. The Assessment Center-API integrates the administration and scoring of PROMIS CATs into EHR systems. This was achieved by developing the API as a RESTful web service. A recent release of the Assessment Center-API incorporated a Fast Healthcare Interoperability Resources (FHIR) 18 endpoint to better align with health care IT standards. All three sites were able to successfully integrate the Assessment Center-API with their EHR and modify staff and patient workflows to capture PROMIS CATs. In all cases, the success of the implementation led to expansion across the health care system. Their implementations shared numerous features. First, all utilized multidisciplinary teams to identify useful measures and to plan with a goal of limiting workflow disruption. All conducted iterative evaluation and facilitated optimization to address identified workflow problems. All focused on a single orthopaedics clinic prior to expansion to other areas.
All used tablets distributed to patients in the waiting room to minimize previously documented workflow disruptions. 6 All sites created custom software enabling QR or bar codes to link PRO assessments with EHR data. This feature eliminated typing errors and saved time when initiating PRO assessments. All sites had support from institutional leadership, and the integration projects were initiated as process improvements for routine care. This approach to implementation supports previously published research documenting the importance of pre-implementation planning, addressing the impact on workflow, conducting workforce training, utilizing relevant measures, and including diverse teams with clinician champions. 19-22

Conclusion: Since CATs are not readily available natively in EHRs, the Assessment Center-API is a feasible solution for integration of PROMIS CATs in EHRs. All sites conducted a pilot rollout in a single clinic to help identify issues to resolve prior to their institution-wide rollout. Institution leadership buy-in was necessary for success. Formation of a committee of diverse faculty and staff provided guidance on implementation to reduce workflow disruption and increase engagement.

Correct Answer: The correct answer is option a. PROMIS measures are symptom- and function-based and not disease-specific. This makes PROMIS measures appropriate for use in patients with a wide range of chronic diseases and demographic characteristics. This enables individuals with multiple conditions to complete fewer PROs and for clinicians to compare scores across patients.

2. What statement is not true about how the three organizations integrated the PROMIS AC-API with their EHR?
a. Organizations had organization leadership buy-in.
b. Organizations started by offering assessments at home through their EHR portal.
c. Organizations first rolled out their implementation as a pilot program in one clinic.
d. QR codes were used to transfer patient identification from the EHR to iPads before patients took their scheduled assessments.
Correct Answer: The correct answer is option b. All organizations first started their integration implementation with in-clinic data collection on iPads.

This work was conducted as part of routine clinical care at all sites; therefore, informed consent was not required. None declared.

References:
Health status assessment. Completing the clinical database
Patient-reported outcomes: harnessing patients' voices to improve clinical care
Implementing patient-reported outcome surveys as part of routine care: lessons from an academic radiation oncology department
Bringing PROMIS to practice: brief and precise symptom screening in ambulatory cancer care
Patient-reported outcomes and the evolution of adverse event reporting in oncology
Provider perspectives on the integration of patient-reported outcomes in an electronic health record
A clinically integrated mHealth App and practice model for collecting patient-reported outcomes between visits for asthma patients: implementation and feasibility
PROMIS Cooperative Group. The Patient-Reported Outcomes Measurement Information System (PROMIS) developed and tested its first wave of adult self-reported health outcome item banks
Health status assessment for the twenty-first century: item response theory, item banking and computer adaptive testing
Composer: visual cohort analysis of patient outcomes
Leveraging patient-reported outcomes using data visualization
Application Development over Software-as-a-Service Platforms
"Big" web services: making the right architectural decision
BYOD: One Instrument Developer's Technical Solution
Assessment Center API: A Software Component Model for the Integration of Patient Reported Outcomes (PRO) into Clinical Care
Research electronic data capture (REDCap): a metadata-driven methodology and workflow process for providing translational research informatics support
FHIR Release 4 website. Accessed
Implementing patient-reported outcome measures into clinical practice across NSW: mixed methods evaluation of the first year
Factors associated with increased collection of patient-reported outcomes within a large health care system
Implementation of electronic patient-reported outcomes in routine cancer care at an academic center: identifying opportunities and challenges
Planning for patient-reported outcome implementation: development of decision tools and practical experience across four clinics

The authors would like to acknowledge all the hard work and dedication of the clinical staff and providers at the three sites for making the case report possible.