title: Practitioners Assess Achievements and Challenges of Nonfatal Injury Surveillance
authors: Costich, Julia F.; Vos, Sarah C.; Quesinberry, Dana B.
date: 2021-12-06
journal: J Public Health Manag Pract
DOI: 10.1097/phh.0000000000001464

OBJECTIVE: Injury surveillance relies on data coded for administrative rather than epidemiological accuracy. The Centers for Disease Control and Prevention (CDC) established the 5-year Surveillance Quality Improvement (SQI) initiative to advance consensus and methodology for injury epidemiology reporting and analysis. Evaluation of the positive predictive value of the CDC's injury surveillance definitions based on International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) and International Classification of Diseases, Tenth Revision, Clinical Modification (ICD-10-CM) coding in designated injury categories comprised much of the SQI initiative's work. The goal of the current study is to identify achievements and challenges in SQI as articulated by experienced injury epidemiology practitioners who participated in the CDC-funded SQI initiative.

DESIGN, SETTING, AND PARTICIPANTS: We conducted semistructured interviews with 12 representatives of state and federal public health agencies who had participated extensively in the SQI initiative. The interviews were transcribed and coded using NVivo qualitative analysis software. Initial coding of the data involved both in vivo coding (using the words of participants) and coding of a priori themes.

MAIN OUTCOME MEASURES: Qualitative analysis identified 2 overarching themes: variability among states and observations on the science of injury surveillance.

RESULTS: Within the 2 broad themes, the respondents provided valuable insights regarding access to medical records, case definition validation, unique contributions of medical record abstracting, variations in the practice of medical coding, and the potential for use of data from medical record reviews in other injury-related areas.

CONCLUSIONS: The contributions of the SQI initiative have provided valuable insights into ICD-10-CM case definitions for national injury surveillance. Challenges remain with regard to data access and quality with ongoing reliance on administrative datasets for injury surveillance.

The Centers for Disease Control and Prevention (CDC) has funded 2 waves of collaboration (2011-2016 and 2016-2021) with selected state public health agencies to improve the quality of nonfatal injury surveillance. The fundamental challenge of nonfatal injury surveillance is that, like most epidemiological initiatives, it has no dedicated data-gathering mechanism. Instead, surveillance depends on administrative claims data that are created to bill for hospital emergency department and inpatient care.1 Specialists code these data using the International Classification of Diseases, Tenth Revision, Clinical Modification (ICD-10-CM).2,3 Coders use guidelines and software to identify detailed diagnoses in clinicians' documentation and assign the most appropriate codes. For epidemiological analysis and reporting, injuries are classified by nature (eg, concussion, fracture, burn), body part, cause, and intent.2 Data sets are typically available for analysis no sooner than a year after the dates of service.4 Injury data must be coded consistently over time to support longitudinal comparisons.
For example, policy makers need to know whether policy initiatives are achieving their intended goals or whether new products are associated with increases in specific types of injuries. When coding systems change, data analysts must develop crosswalks between the old and new systems to ensure that each code is capturing the same type of case. The ICD-10-CM greatly expanded the number of injury codes available, from about 2,600 in its precursor, the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM), to some 43,000.5,6 This expansion should support much more nuanced injury epidemiology, but it also creates methodological barriers to consistent analysis over time.4

The overarching goal of the Surveillance Quality Improvement (SQI) initiative was to advance consensus and methodology for injury reporting and analysis. A significant part of this work was evaluation of the positive predictive value of the CDC's injury surveillance definitions based on ICD-9-CM and ICD-10-CM coding in nationally prioritized injury categories, including injuries to young children, older adult falls, opioid overdoses, traumatic brain injury (TBI), and self-harm, as well as issues in the transition between coding systems. The positive predictive value metric compares the codes assigned to individual patient records with the medical record documentation to quantify the proportion of records that are actual cases falling within either the broad case definition (eg, TBI) or the definition of a specific code7 (a compact formulation of this metric appears below). This approach is used for ICD-10-CM case definitions because there is no "gold standard" against which to assess coding accuracy.7 Inaccurate coding in the context of injury surveillance does not reflect negatively on a coder's expertise, because coders' work aims for accuracy in billing and reimbursement rather than diagnostic specificity. Particularly in the context of ICD-10-CM injury codes, the range of coding options far exceeds the range of billed and reimbursed charges.4

The work of the 4 SQI participant states and the CDC subject-matter experts during the study period from 2016 to 2021 focused on difficult case definitions, including those for TBI, self-harm, and perinatal injury, as documented in several peer-reviewed publications.4,8-12 Related studies addressing drug overdose cases helped lay the groundwork for the SQI initiative,13 along with preparatory studies undertaken by epidemiologists in Colorado and Massachusetts. The Council of State and Territorial Epidemiologists' Injury Surveillance Workgroups provided essential guidance for the development of SQI methods and strategies.2

The goal of this study is to identify achievements and challenges in SQI as articulated by the injury epidemiology practitioners who participated in the CDC-funded SQI initiative. Peer-reviewed publications, white papers, and conference presentations have articulated findings from individual SQI studies, whereas the present analysis is unique in addressing the SQI project as a whole.

Twelve representatives of state and federal public health agencies who had leading roles in their agencies' SQI work were invited to participate in the study. The study was approved under expedited review by the University of Kentucky Institutional Review Board (protocol # 63751). The participants consented to having their anonymous observations quoted or paraphrased.
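In compact form (our notation; the SQI publications describe the positive predictive value in prose rather than as a formula), the metric evaluated in these studies is:

\[
\mathrm{PPV}
= \frac{\text{records flagged by the ICD-10-CM case definition and confirmed as true cases on abstraction}}
       {\text{all records flagged by the ICD-10-CM case definition}}
= \frac{TP}{TP + FP}
\]

A case definition that flags many records that abstraction cannot confirm therefore has a low positive predictive value, which is precisely what repeated chart review in the SQI projects was designed to detect.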
Project investigators conducted 2 small group interviews and 1 individual interview with federal officials, as well as 2 small group interviews and 2 individual interviews with state agency staff. Six federal officials and 6 state-level officials participated in the interviews. All were conducted using Zoom videoconference technology. The semistructured interviews followed a protocol in which the participants were asked to describe their role in the project, reflect on aspects of the project that made it more or less difficult, and identify opportunities for future work to support positive change. Video interviews were recorded and transcribed; the lead interviewer reviewed transcripts for accuracy. We then employed NVivo software (March 2020 version) to conduct a thematic analysis. Transcripts were first coded in vivo (using the words of participants), and the coded documents were analyzed to identify a priori themes.14 The research team then identified 2 overarching themes: variability among the states and insights regarding the science of injury surveillance. The data were then recoded using a constant comparison method to refine these themes.

The participants noted that variations in state data access policy and practice contributed to the difficulty of the project because original electronic health records were required to validate the ICD-10-CM codes. As one federal official noted, the protocol for an SQI project "sounds easy": Obtain the medical records from a particular state and then examine the ICD-10-CM codes of interest. However, that "simple concept gets much harder to actually implement because of the state-to-state differences." These differences arose both in the administrative data sets and in medical record abstraction. "One of the places where we really struggle when it comes to something like health data systems or even syndromic surveillance . . . is this problem of the disparate ways in which it's collected across states and jurisdictions," said another federal official. This variation is common and well documented in other areas.15,16 The contribution of the SQI initiative was to develop consensus approaches to overcome the obstacles posed by variability among the states.

States differed in their ability to obtain medical records for validation of ICD-10-CM coding, with regard to both the population covered and the completeness of records. This variation made the development of a consensus abstraction form critical to the production of consistent, coherent project findings. For example, 2 of the 4 states had legal authority to request records from all state facilities, while the other 2 cooperated with trauma centers serving broad geographic areas. The structure of the records themselves also varied within and among states. Each state oversaw abstraction of its own records, and data analysts had different types of expertise. "It's challenging to have different groups of folks look at a medical record and pick out different things," noted one federal official:

"So one group will have trauma docs, another will have residents, others will have medical records folks doing the review. My understanding is that there are different skill sets among these different groups of people who are doing the medical record review and that they have different sets of eyes looking for things."

Some of the variability in abstractors was traceable to resources (including staffing) available to the different state teams.
One state participant noted that the CDC was surprised that their state employed contractors to access the medical records themselves. "Turnover in staffing and lack of full staffing was problematic," the state participant said. Another state had access to trauma surgeons and emergency department physicians to help with the abstracting. That state used the clinical experts to help design the abstracting protocol. Each SQI project began with expert interviews "to understand the subject area and the important elements." "Injury epidemiology background alone is not sufficient to carry out this type of study," the respondent noted, and "we were able to improve the way we ask questions." Several key informants recommended involving clinical practitioners in future studies. "There is no easy way for non-physicians to judge whether a case is appropriate to confirm based on the medical record. You can use clinical judgment, or you can entirely rely on physician notes," the participant noted. Other state teams lacked access to this expertise. "It was often an afterthought when we hit a problem; we asked any of the state reps, 'Do you have a clinician that would answer a question or two?'" recalled one state respondent.

To reconcile variation in medical records and the abstracting process, state teams used a consensus process that included input from all states and the CDC officials. For example, TBI diagnoses can include groups of signs and symptoms in the absence of symptomatic or imaging evidence. Knowledge created through these discussions was valuable, one federal official explained:

"Those conversations, even though sometimes it feels like it's in the minutiae, I think it's really important for the people who are abstracting these records and to get good quality results. It challenges people to think through those hard topics together."

US health systems use ICD-10-CM to support billing to third-party payers; public health injury surveillance is a secondary use of the data. One federal official noted that because the data were not designed for injury surveillance, there may be coding characteristics related to billing that distort findings. Creating a separate tracking system for injury surveillance is not feasible, said another federal official:

"There are just way too many injuries-it's not like infectious disease where you can just create a new surveillance system, or you can use the electronic lab results to create a surveillance system. And so we're forced to be secondary data users. And where, I think, in applied public health in state government, we often have to say, 'What do we have?'"

The question, the official continued, is not just what data are available but whether the data are valid. The respondents emphasized the importance of the SQI initiative to validate surveillance measures and identify alternative measures for injury surveillance. A federal official stated: "What's really important is that this work has informed the discussion, and you can't just walk away from knowing that positive predictive value." "So, I mean, we will get numbers every day . . . . Does that number physically represent the underlying problem? Are we over-counting? Are we under-counting, or did we mis-count?"

The SQI projects validated previously identified issues with some codes, in particular, the code for unspecified head trauma in the case definition for TBI.
In the ICD-9-CM coding era, emergency department-based studies found that "unspecified injury to the head" codes comprised a majority (50%-58%) of TBI cases in their samples.17-19 However, the multistate SQI team found that in ICD-10-CM, exclusion of the code for "unspecified injury to the head" (S09.90) from the CDC's TBI surveillance definition would likely exclude some true TBI cases.8,11

The SQI studies also addressed issues in the interpretation of emergency department-based data. The brevity of emergency department care impedes the kind of nuanced analysis that is available from inpatient data. However, most injuries do not require hospitalization, so reliance on inpatient data alone would miss the majority of injury-related care encounters. Capturing adequately reliable data in the emergency department is an ongoing challenge. "For us, I think of it as being part of this ongoing conversation of what is emergency department data good for?" a federal official said:

"So there's a difference between whether you want to count in the emergency department group just confirmed cases or cases where an injury diagnosis was ruled out. The [code for unspecified injury to the head] conversation is a piece of what is driving this question on a broader scale, the need to clarify terminology. This also comes up because of all the cases that we now include out of the emergency department that have an external cause [of injury] code that don't end up with an injury diagnosis code."

Our respondents reported that the research teams decided that, for inpatient injury hospitalizations, only principal diagnosis codes should be used as part of the case definitions for injuries (these inclusion rules are sketched schematically below): "We had done analysis to show that, if it didn't have that [principal diagnosis code], it would be less likely to be a reason for that hospitalization," another federal official noted.

The SQI findings point to the need to validate diagnosis codes for other injury surveillance categories. "We need to know what it means for those sexual assault codes," said a federal official. "We need to know those child abuse codes. What do they hold? Right? There's more work to be done in many categories of codes." Several respondents noted that the process developed by the SQI projects can be applied to other injury codes for surveillance validation. One federal official expressed the hope that the work of these projects would inform the development of "a standardized case definition for intentional self-harm . . . and, although that's not necessarily the focus of this project, all of the components [of the ongoing research] are leading up to development of that case definition."

Three state respondents noted the need to share SQI findings with medical coders and develop a relationship that involves them in surveillance process improvement. One state respondent discovered this need after a presentation to her state's organization for medical record coders. "I was shocked to learn that the medical record coders have no idea how this information is used, that the E codes [mechanism of injury codes in ICD-9-CM] are used, and they have no idea that we totally rely on this information for injury surveillance," the respondent said. "It would be nice if there were some funding that came about at some point that allowed us to really strengthen those relationships with the medical record coders in our state."
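The case-inclusion rules described above (counting an inpatient stay only when the principal diagnosis is an injury code, and the unresolved question of emergency department visits that carry an external cause-of-injury code but no injury diagnosis code) can be expressed as simple filters over coded records. The following sketch is illustrative only: the code prefixes are placeholders rather than the CDC surveillance definitions, and the function and field names are ours.

```python
# Illustrative sketch of the case-inclusion logic described by respondents.
# The code prefixes are placeholders, NOT the CDC surveillance definitions:
# ICD-10-CM injury diagnoses sit broadly in the S and T chapters, and external
# cause-of-injury codes in the V-Y chapters, but the CDC case definitions use
# narrower code lists than these prefixes.
INJURY_DX_PREFIXES = ("S", "T")
EXTERNAL_CAUSE_PREFIXES = ("V", "W", "X", "Y")


def is_injury_dx(code: str) -> bool:
    return code.upper().startswith(INJURY_DX_PREFIXES)


def is_external_cause(code: str) -> bool:
    return code.upper().startswith(EXTERNAL_CAUSE_PREFIXES)


def inpatient_injury_case(principal_dx: str) -> bool:
    """Inpatient rule: count the hospitalization only if the principal
    (first-listed) diagnosis is an injury code."""
    return is_injury_dx(principal_dx)


def ed_visit_category(codes: list[str]) -> str:
    """ED rule: separate visits with an injury diagnosis from the ambiguous
    group that carries only an external cause code."""
    if any(is_injury_dx(c) for c in codes):
        return "injury diagnosis present"
    if any(is_external_cause(c) for c in codes):
        return "external cause code only"  # the unsettled group discussed above
    return "no injury-related codes"


# Hypothetical examples:
print(inpatient_injury_case("S06.0X0A"))         # True
print(ed_visit_category(["W01.0XXA", "R51.9"]))  # "external cause code only"
```

In the SQI projects these decisions were reached through the multistate consensus process described earlier; the sketch only makes the resulting logic explicit under these placeholder assumptions.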
Another state respondent noted that her team regularly presented to state medical coders, with beneficial results for their injury surveillance projects. "I will say that because of that, the professional coder community in [the respondent's state] took on the responsibility of creating some injury tools for them," the respondent said. "So there's this commitment from the coding professionals to code accurately and code comprehensively. And they actively worked on using their own processes and the collective impact of their approach to improve quality of coding." A third state developed a free accredited online training module on ICD-10-CM injury coding that trains users to locate, identify, review, and assign the external cause codes found in Chapter 20 of the official ICD-10-CM coding guidelines.3

Medical record abstraction to validate injury surveillance is not widely practiced, noted one federal official, but the process may lend itself to other, relevant research. "So, if we're reviewing whatever-200 medical records for suicide-related things and self-harm-related things-are there other things we could check those medical records for outside of suicide-related stuff?" the official noted:

"Are there other questions that we should be answering or be using this opportunity to look at? And I don't really have that question or thought, like, fleshed out enough to give you something concrete. But I just think that this SQI work [ie, the chart abstracting process] is so important and it's such a rich source of data and information. Is there any way that we could be maximizing the work that's already going into it?"

The CDC SQI initiative has yielded valuable insights into the use of ICD-10-CM codes for injury case definitions. However, the SQI initiative's meticulous examination of ICD-10-CM coding has also identified opportunities for improvement. Recently proposed regulations from the Centers for Medicare & Medicaid Services (CMS) would help address one of these problems: public health agency access to electronic health records. Under current federal regulations, hospitals and other providers are not required to send information to public health agencies electronically. The proposed regulations would require participants in some of the CMS enhanced payment programs to provide public health agencies with more timely access to electronic data so as to address public health issues, such as COVID-19 cases.20 Another example of more coherent use of electronic health information for public health surveillance is the National Institutes of Health's National COVID Cohort Collaborative (N3C), which has supported advances in pandemic analysis that are intentionally structured to overcome the fragmentation of US health data.21 The urgency of public health surveillance improvement in the context of the COVID-19 pandemic was recently highlighted by American Public Health Association President Georges Benjamin, who stated: "Our surveillance systems are outmoded, and our data reporting systems are truly out of date."22 Timely and complete access to electronic health information would certainly enhance surveillance, but many local health agencies lack the capacity to process electronic health data; reliance on fax transmissions illustrates this deficiency.
Interoperability remains elusive where electronic data systems are available at all for public health agencies, and the technology for interpreting such voluminous data is still under development. Using either the types of transmissions envisioned in the proposed Centers for Medicare & Medicaid Services regulations or full EHR data for injury surveillance will require substantial new investment. The development of reliable machine learning strategies and reporting capacity that yields actionable findings has been galvanized by the COVID-19 pandemic but is still a work in progress. In the interim, administrative data are likely to serve as the foundation of injury surveillance, and the potential mismatches among claims data elements and injury surveillance elements of interest will require the type of analysis exemplified by the SQI initiative.

Implications for Policy & Practice
■ Current surveillance for injury epidemiology relies heavily on ICD-9-CM- and ICD-10-CM-coded administrative claims data for emergency department visits and inpatient hospitalizations.
■ Key informants have identified achievements and challenges related to the quality of data supporting ICD-10-CM-based case definitions for injury.
■ The COVID-19 pandemic response has highlighted the urgent need for timely and reliable public health surveillance.
■ Public health interventions can be effective only if they are based on dependable public health surveillance data. Data quality improvement requires ongoing support for analyses by injury epidemiologists.

References
1. Use of administrative medical databases in population-based research.
2. The transition from ICD-9-CM to ICD-10-CM: guidance for analysis and reporting of injuries by mechanism and intent.
3. Centers for Medicare & Medicaid Services, National Center for Health Statistics. ICD-10-CM official guidelines for coding and reporting-FY 2020.
4. Use of ICD-10-CM coded hospitalisation and emergency department data for injury surveillance.
5. Proposed framework for presenting injury data using the International Classification of Diseases, Tenth Revision, Clinical Modification (ICD-10-CM) diagnosis codes.
6. The International Classification of Diseases, 10th Revision, Clinical Modification (ICD-10-CM) external cause-of-injury framework for categorizing mechanism and intent of injury.
7. What does validation of cases in electronic record databases mean? The potential contribution of free text.
8. Multisite medical record review of emergency department visits for traumatic brain injury.
9. An emergency department medical record review for adolescent intentional self-harm injuries.
10. Validation of ICD-10-CM codes for injuries complicating pregnancy, childbirth and the puerperium: a medical record review.
11. Multisite medical record review of emergency department visits for unspecified injury of head following the ICD-10-CM coding transition.
12. Validation of ICD-10-CM surveillance codes for traumatic brain injury inpatient hospitalizations.
13. Interrupted time series design to evaluate the effect of the ICD-9-CM to ICD-10-CM coding transition on injury hospitalization trends.
14. The Coding Manual for Qualitative Researchers.
15. Measuring complications of serious pediatric emergencies using ICD-10.
16. Heterogeneity introduced by EHR system implementation in a de-identified data resource from 100 nonaffiliated organizations.
17. Children and youth with "unspecified injury to the head": implications for traumatic brain injury research and surveillance.
18. Accuracy of mild traumatic brain injury case ascertainment using ICD-9 codes.
19. Validity of administrative data for characterizing traumatic brain injury-related hospitalizations.
20. Proposed federal rules could transform how electronic health records support care. Pew Charitable Trusts.
21. It took a pandemic, but the US finally has (some) centralized medical data.
22. How the US failed to prioritize SARS-CoV-2 variant surveillance.