Title: Rapid Tests for Influenza, Respiratory Syncytial Virus, and Other Respiratory Viruses: A Systematic Review and Meta-analysis
Authors: Bruning, Andrea H. L.; Leeflang, Mariska M. G.; Vos, Johanna M. B. W.; Spijker, Rene; de Jong, Menno D.; Wolthers, Katja C.; Pajkrt, Dasja
Journal: Clin Infect Dis, 15 September 2017. DOI: 10.1093/cid/cix461

Rapid diagnosis of respiratory virus infections contributes to patient care. This systematic review evaluates the diagnostic accuracy of rapid tests for the detection of respiratory viruses. We searched Medline and EMBASE for studies evaluating these tests against polymerase chain reaction as the reference standard. Of the 179 studies included, 134 evaluated rapid tests for influenza viruses, 32 for respiratory syncytial virus (RSV), and 13 for other respiratory viruses. We used the bivariate random effects model for quantitative meta-analysis of the results. Most tests detected only influenza viruses or RSV. Summary sensitivity and specificity estimates of tests for influenza were 61.1% and 98.9%, respectively; for RSV, summary sensitivity was 75.3% and specificity 98.7%. We assessed the quality of studies using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) checklist. Because of incomplete reporting, the risk of bias was often unclear. Despite their intended use at the point of care, only 26.3% of tests were evaluated at the point of care. Although newly developed tests seem more sensitive, high-quality evaluations of these tests are lacking.

Acute respiratory tract infections (RTIs) are a leading cause of disease and death worldwide [1]. Recent studies suggest that more than half of RTIs are caused by viruses, even in severely ill patients [2-4]. Commonly detected viruses include influenza viruses, respiratory syncytial virus (RSV), adenovirus, human metapneumovirus (hMPV), human parainfluenza viruses, coronaviruses, and rhinoviruses [5, 6]. Clinical signs and symptoms of viral RTIs overlap with those of bacterial infections, and it is challenging to clinically distinguish bacterial from viral infections, or between different viral pathogens [7]. This diagnostic uncertainty leads to overprescription of antibiotics and to extra diagnostic testing (along with its costs) to rule out bacterial infections [8, 9].

Rapid detection of viral pathogens could overcome these disadvantages. In addition, prompt viral diagnosis may lead to rapid implementation of infection control measures, early administration of antiviral medication where available, and shorter hospital stays, resulting in reduced healthcare costs [10-12]. For this reason, rapid diagnostic or point-of-care tests have been developed. Compared with other diagnostic modalities (culture, polymerase chain reaction [PCR], or immunofluorescence testing), point-of-care tests are often faster, less expensive, easier to use, and accessible to staff without laboratory training. They have the potential to be carried out at or near the point of care. There is a clear trend toward point-of-care testing, and the number and quality of rapid diagnostic tests for respiratory viruses have rapidly increased [13]. For clinicians, it is important to be aware of the diagnostic accuracy of the different rapid viral tests, the factors that affect their accuracy, and their performance in daily practice.
Previous systematic reviews addressing the diagnostic accuracy of respiratory tests either evaluated only a single respiratory virus (influenza or RSV) or were conducted in specific populations [14-16]. The aim of our review was to provide a state-of-the-art overview of all available rapid tests for the detection of respiratory viruses in patients of all ages with RTIs. We systematically summarized the available evidence on their diagnostic accuracy for virus detection compared with PCR testing. We assessed the quality of included studies and quantitatively summarized the results in a meta-analysis.

This systematic review was conducted according to a protocol based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Protocols 2015 (PRISMA-P 2015) and registered in the PROSPERO database (registration No. CRD42015024581). A systematic literature search was conducted in the Medline and EMBASE electronic databases, through the Ovid interface, from inception to 18 January 2016. The reference lists of all articles were hand searched for additional studies. The search strategy was developed in collaboration with a medical information specialist (R. S.) and contained search terms for the most common respiratory viruses (influenza or respiratory syncytial virus or metapneumovirus or parainfluenza virus or human adenovirus or human rhinovirus or human bocavirus or human coronavirus) combined with search terms for rapid diagnostic tests (diagnostic kit or antigen [test or detection] or reagent or immunotest or point-of-care systems or rapid, simple, easy, or quick test), including brand names of the most common commercial rapid tests. The complete search strategy is shown in Appendix A. After removal of duplicates, the search results were imported into Covidence, a Web-based software platform that streamlines the production of systematic reviews (https://www.covidence.org/).

Studies were considered for inclusion if they were written in English or Dutch and reported original data regarding the accuracy of a rapid test for ≥1 respiratory virus compared with PCR. Although standardization across PCR methods is currently lacking, we selected PCR as the appropriate reference standard because of its current status as the reference method for detecting respiratory viruses [17]. Rapid tests were defined (adapted from the World Health Organization simple/rapid tests definition [18]) as any commercially available quick (up to 2 hours) and easy-to-use test requiring little or no additional equipment or technical skills. Only original studies were included. Studies evaluating in-house tests or precommercial versions of rapid tests were excluded, as were studies using the rapid test result as part of a composite reference standard (to avoid incorporation bias) or performing PCR solely on samples with negative rapid test results (to avoid partial verification bias). Case-control studies (use of the rapid test on previously tested, known positive or negative samples), case reports, conference abstracts, reviews, and veterinary studies were also excluded.

Two reviewers (A. H. L. B. and D. P.) independently assessed eligibility for inclusion. Initial selection was based on screening of title and abstract; full-text versions were then assessed for eligibility, and reasons for exclusion were documented. Two reviewers independently extracted data.
Information collected included author, year of study and publication, country, study population, specimen type, setting, whether the test was performed in a laboratory or at the point of care, brand name, and the data required to calculate sensitivity and specificity. With these data we constructed 2 × 2 contingency tables. If 2 × 2 tables could not be extracted or calculated from the published data, we contacted the authors for additional information. In some articles, the same sample sets were tested with different rapid tests; each test comparison was considered a separate analysis, resulting in a total number of included studies higher than the actual number of included articles. The methodological quality of studies was independently reviewed by 2 authors (A. H. L. B. and J. M. B. W. V.) using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) criteria, as recommended by the Cochrane Collaboration [19]. Disagreements about inclusion, data extraction, and quality assessment were resolved by consensus.

From the 2 × 2 tables, we estimated the sensitivity and specificity of the rapid tests. Each respiratory virus can be regarded as a separate target condition; we therefore analyzed the accuracy of the tests for each virus separately. Studies evaluating rapid tests for influenza viruses sometimes reported separate 2 × 2 tables for influenza A and B. However, most of the included tests detected both influenza types and made no distinction between them. We therefore used "any influenza" in our main analysis and evaluated test accuracy for individual (sub)types in subgroup analyses. Sensitivity and specificity estimates were plotted in receiver operating characteristic space. Because sensitivity and specificity are negatively correlated, the meta-analysis should be performed on both outcome measures simultaneously; for this, we used a bivariate random effects meta-regression model [20]. To investigate the effect of potential sources of heterogeneity, we added the following covariates to the model: study population (children vs adults), commercial brand of the rapid test, and whether or not the test was performed at the point of care. We calculated summary sensitivity and specificity estimates for each covariate. To assess the effect of study quality, we performed sensitivity analyses. All analyses were performed with SAS software, version 9.4.

Our literature search identified 3527 articles. After screening, 383 articles were eligible for full-text review, of which 258 were excluded (Figure 1). The main reasons for exclusion were that the rapid test was evaluated in a case-control study, the test could not be performed rapidly, the study did not include original data, a reference test other than PCR was used, or the study was not reported in English or Dutch. Because several reports evaluated >1 test, 179 separate studies were included in the systematic review and meta-analysis. Of these, 134 studies (74.9%) evaluated rapid tests for influenza viruses, 32 (17.9%) for RSV, and the remaining 13 (7.3%) for other respiratory viruses (hMPV, adenovirus, and parainfluenza virus; Table 1). Supplementary Table 1 provides the main characteristics of the included studies, and Supplementary Figure 1 displays the accompanying forest plots of the estimated sensitivities and specificities. Table 1 shows that a total of 50 different rapid tests were evaluated.
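For reference, the quantities estimated in the analysis described above can be written out explicitly. The notation below is a standard textbook formulation of per-study accuracy and of the bivariate random-effects model cited in the Methods [20]; it is our own sketch, not an excerpt from the article or its SAS code.

```latex
% Per-study sensitivity and specificity from the 2 x 2 table of study i,
% with cell counts TP_i, FP_i, FN_i, TN_i against the PCR reference standard:
\[
  \mathrm{Se}_i = \frac{TP_i}{TP_i + FN_i}, \qquad
  \mathrm{Sp}_i = \frac{TN_i}{TN_i + FP_i}.
\]
% The bivariate random-effects model treats the logit-transformed pairs as
% jointly normally distributed across studies:
\[
  \begin{pmatrix} \operatorname{logit}\mathrm{Se}_i \\ \operatorname{logit}\mathrm{Sp}_i \end{pmatrix}
  \sim \mathcal{N}\!\left(
    \begin{pmatrix} \mu_{Se} \\ \mu_{Sp} \end{pmatrix},
    \begin{pmatrix} \sigma_{Se}^2 & \sigma_{SeSp} \\ \sigma_{SeSp} & \sigma_{Sp}^2 \end{pmatrix}
  \right),
\]
% with within-study variability handled by binomial likelihoods (or normal
% approximations to the logits). Summary sensitivity and specificity are the
% back-transformed means, e.g. logit^{-1}(mu_Se); covariates such as age group,
% test brand, and setting enter as regression terms on mu_Se and mu_Sp.
```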
The most frequently studied tests for influenza were the Alere BinaxNOW Influenza A&B Test, Quidel QuickVue Influenza A+B, BD Directigen EZ Flu A+B, and Quidel Sofia Influenza A+B FIA. For RSV, the BD Veritor System, Alere BinaxNOW RSV Card, and Quidel Sofia RSV FIA were evaluated most frequently. Most studies were conducted either in children or in populations including both adults and children. Of the studied rapid tests, 26.3% were evaluated at the point of care.

An overview of the risk of bias and of concerns regarding the applicability of included articles is presented in Figure 2. Because of our inclusion and exclusion criteria, studies were free of spectrum, incorporation, and partial verification bias, and all included studies used an appropriate reference standard. Nevertheless, several study characteristics might introduce a risk of bias. Because inclusion and exclusion criteria were often defined unclearly or not at all, the risk of bias regarding patient selection was difficult to assess. In at least 37 articles (29.6%), the index test was performed at the point of care, suggesting that index test results were interpreted without knowledge of reference standard results. In 8 of these articles, additional information was available on the clinical feasibility or logistical organization of the test. Because study characteristics relevant to the risk of bias were frequently described incompletely, these items were often difficult to assess and were scored as unclear risk. Concerns regarding applicability were generally low.

For influenza, sensitivity ranged from 4.4% to 100.0%. Across all rapid tests detecting influenza, the summary estimates were 61.1% for sensitivity (95% confidence interval [CI], 53.3%-68.3%) and 98.9% for specificity (95% CI, 98.4%-99.3%). Operating characteristics of the influenza studies are shown in Figure 3A. For RSV, sensitivity varied more than specificity, but sensitivity estimates were higher than for influenza, ranging from 41.2% to 88.6% (Figure 3B). The summary sensitivity and specificity across all studies evaluating RSV were 75.3% (95% CI, 72.6%-77.8%) and 98.7% (95% CI, 97.3%-99.4%), respectively. Owing to the small number of included studies for adenovirus, hMPV, and parainfluenza virus types 1-3, meta-analysis was not performed for these viruses.

The sensitivities of rapid tests for influenza A (68.1%; 95% CI, 58.9%-76.0%) and influenza B (71.0%; 95% CI, 56.8%-82.1%) were comparable. A subset of studies (n = 58) evaluated the diagnostic accuracy of rapid tests only for detecting H1N1, although most of these tests were not developed to detect H1N1 specifically. The sensitivity for H1N1 was generally lower (54.0%; 95% CI, 47.6%-60.3%) than for the other (sub)types (Table 2). Diagnostic accuracy for influenza was significantly lower when tests were performed in adults (sensitivity, 34.1%; 95% CI, 14.0%-54.1%) than in children or a mixed population (P < .01). For RSV, age did not significantly influence performance characteristics (P = .20).

To identify the rapid test with the best performance characteristics, we evaluated pooled summary estimates for each test separately; this was possible only for tests evaluated in ≥4 studies. For influenza, mariPOC and Sofia Influenza A+B FIA had the best overall performance, with summary sensitivities of 76.1% and 75.3%, respectively, and summary specificities of 99.4% and 95.3%, although in some studies [21] the specificity of Sofia Influenza A+B FIA was lower.
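To illustrate in concrete terms how per-study results are combined into summary estimates like those above, the following Python sketch pools logit-transformed sensitivities with a simple DerSimonian-Laird random-effects step. This is a deliberately simplified, univariate stand-in for the bivariate model actually used in the review (which was fitted in SAS and models sensitivity and specificity jointly); the 2 × 2 counts are invented for illustration and do not come from any included study.

```python
import numpy as np

# (true positives, false negatives) per hypothetical study; invented numbers
studies = [(45, 30), (60, 25), (12, 20), (80, 35)]

def logit_and_var(tp, fn):
    """Logit-transformed sensitivity and its approximate within-study variance
    (a 0.5 continuity correction guards against zero cells)."""
    tp, fn = tp + 0.5, fn + 0.5
    sens = tp / (tp + fn)
    return np.log(sens / (1 - sens)), 1 / tp + 1 / fn

y, v = np.array([logit_and_var(tp, fn) for tp, fn in studies]).T

# DerSimonian-Laird estimate of the between-study variance (tau^2)
w = 1 / v
y_fixed = np.sum(w * y) / np.sum(w)
q = np.sum(w * (y - y_fixed) ** 2)
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - (len(studies) - 1)) / c)

# Random-effects pooled logit sensitivity, back-transformed to a proportion
w_re = 1 / (v + tau2)
pooled_logit = np.sum(w_re * y) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))
pooled_sens = 1 / (1 + np.exp(-pooled_logit))
ci = 1 / (1 + np.exp(-(pooled_logit + np.array([-1.96, 1.96]) * se)))

print(f"pooled sensitivity ~ {pooled_sens:.3f} (95% CI {ci[0]:.3f}-{ci[1]:.3f})")
```

Replacing this per-axis pooling with the full bivariate model additionally captures the negative correlation between sensitivity and specificity noted in the Methods.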
For RSV, an estimated sensitivity and specificity per test could be calculated only for BD Veritor RSV, Sofia RSV FIA, and BinaxNOW RSV; Sofia RSV FIA had the best overall performance. For both influenza and RSV, diagnostic test accuracy was not influenced by setting, that is, by whether the test was performed at the point of care or in the laboratory. The risk of bias in studies evaluating influenza was generally low. We assessed the difference between studies with a low risk of bias in the patient selection domain and those with an unclear risk of bias in this domain; sensitivity was generally lower in studies with a low risk of bias (53.3%; 95% CI, 42.1%-64.1%). For RSV, the investigated quality criteria did not have a statistically significant effect on diagnostic test accuracy.

In this systematic review and meta-analysis we provide an overview of the rapid tests that are available for the detection of respiratory viruses in patients with RTIs. The sensitivity of these tests varied considerably, but specificity was high. A positive rapid test result can thus be used to rule in a respiratory viral infection, but false-negative results are common. The performance of rapid tests for RSV was generally superior to that for influenza. A major advantage of rapid tests is their potential to be performed in a nonlaboratory setting, yet only 26.3% were evaluated at the point of care.

Although diagnosing RTIs requires a syndromic approach, because the symptoms of respiratory virus infections overlap, only a few currently available rapid tests can detect multiple viruses simultaneously. Most rapid tests detect only influenza viruses or RSV. This is historically understandable, because these viruses were long considered the most important respiratory viruses; furthermore, the availability of antivirals for influenza makes rapid influenza testing a high priority. However, recent studies indicate that other respiratory viruses, such as rhinoviruses and hMPV, can also cause severe respiratory illness and are sometimes detected at higher frequencies than influenza and RSV [22, 23].

The results of our heterogeneity investigation show that rapid test performance is comparable for influenza A and influenza B, in line with the findings of a previous meta-analysis [14]. Most of the included studies were performed in children, especially those evaluating rapid tests for RSV, which at least partly explains the lack of influence of age on the diagnostic accuracy estimates for RSV. For influenza, however, rapid test performance was significantly better in children, as reported elsewhere [24, 25]. Age is inversely associated with viral load, which may explain the better test performance in children. Testing at the point of care did not influence diagnostic test accuracy, although this finding should be interpreted with caution: in many studies the test was not evaluated at the point of care, nor were the setting or personnel described, and direct head-to-head comparisons between tests performed at the point of care and in the laboratory were limited. Moreover, only 8 of the 37 articles describing tests performed at the point of care included information on the clinical feasibility of the test, that is, whether the test was easy to use, how rapid test results were communicated, or how the personnel performing the rapid tests judged its usefulness.

By estimating the pooled sensitivities of the rapid tests, we aimed to determine which test performed best. These conclusions should be interpreted cautiously.
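A short worked example makes the "rule in, but not rule out" point above concrete. It uses the pooled influenza estimates reported in the Results (sensitivity 61.1%, specificity 98.9%) together with an assumed pretest probability of 20%; the pretest probability is an illustrative assumption, not a figure from the review.

```python
# Post-test probabilities for a rapid influenza test, using the pooled summary
# estimates from this review and an assumed (illustrative) 20% pretest probability.
sens, spec, prev = 0.611, 0.989, 0.20

ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))        # P(flu | positive test)
npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)  # P(no flu | negative test)

print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")
# With these inputs: PPV ~ 93%, NPV ~ 91%, so roughly 1 in 11 test-negative
# patients would still have influenza; a negative result cannot rule it out.
```

The high specificity drives the high positive predictive value, whereas the moderate sensitivity limits the negative predictive value; at lower pretest probabilities the positive predictive value also falls.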
One reason for caution is that pooling required at least 4 studies per test, so we could not evaluate all rapid tests separately, and the newly developed molecular testing devices have been evaluated only sporadically in prospective cross-sectional studies [26, 27]. In addition, rapid test kits from the same company may undergo manufacturing changes over time. When we analyzed only high-quality studies (ie, those with a low risk of bias), the overall sensitivity decreased, implying an even lower diagnostic accuracy for rapid tests.

Diagnostic accuracy studies are described in different ways, and no standard terminology is available [28]. To include all available evidence, we constructed our search strategy as broadly as possible. Data extraction and quality assessment were performed by 2 independent reviewers, minimizing the risk of subjectivity. Methodological shortcomings, such as incorporation bias or partial verification bias, can lead to underestimation or overestimation of diagnostic test performance; we tried to avoid this by applying strict exclusion criteria. We deliberately chose to include only studies in which PCR was used as the reference standard, because of its status as the reference standard method for detecting respiratory viruses [17]. This is an important strength of our systematic review, as it provides a more realistic accuracy estimate for rapid tests. In previous studies using viral culture or immunofluorescence as a reference, pooled sensitivities were 9%-14% higher than in studies that used PCR, and test accuracy was overestimated [15]. We did not assess publication bias because no accurate methods for detecting it in diagnostic test accuracy studies exist [28].

The accuracy of a systematic review depends on the quality of the included studies. As demonstrated in our quality assessment, many items regarding the risk of bias were unclear. Sensitivity and specificity estimates are highly dependent on study design and are influenced by many factors, such as virus prevalence, predominant circulating virus strain, time from illness onset to sample collection, type and quality of respiratory specimen, age, and disease severity [11, 29]. In many studies evaluating specific rapid tests, these items were lacking or unclear; this is a major limitation of diagnostic accuracy studies in general. Comparisons between tests should ideally be performed in the same study, against the same reference standard, and in the same patients.

In the current era of emerging novel respiratory viruses, there is a growing need for rapid, sensitive, and specific identification of viral pathogens to allow prompt, effective antimicrobial therapy, reduce additional diagnostic testing, and support pathogen-specific infection control measures. Rapid tests have the potential to fulfill these needs, but one should be aware of their current limitations in diagnostic performance and in the range of pathogens identified. More sensitive and specific rapid multiplex molecular assays are in development; they have the potential to rapidly and accurately identify not only respiratory viruses but also bacteria. Fully automated molecular methods are commercially available and are presented as designed for operation at the point of care [30]. Although these newer tests seem to perform comparably to laboratory-operated PCR assays, comprehensive state-of-the-art clinical evaluations are lacking.
In addition to high costs and low sample throughput, major drawbacks of these newer diagnostic devices are their technical complexity and dependence on electronic equipment [31, 32], which limit their suitability for direct point-of-care use. In the coming years, nonmolecular rapid tests will therefore continue to have a role in practical patient care, and it is important for clinicians to be aware of their availability and performance characteristics. Our systematic review provides a helpful tool toward this understanding. Although some studies have already evaluated the impact of rapid diagnostics on patient management [33-35], randomized controlled trials are needed to assess the clinical relevance of rapid tests in terms of outcomes such as antibiotic use, length of hospital stay, and cost efficiency. In addition, point-of-care testing requires novel strategies for logistics and organization [36]. For successful implementation of rapid tests in the clinic, laboratory validation alone is not sufficient; a high-quality evaluation at the point of care is also required.

Supplementary materials are available at Clinical Infectious Diseases online. Consisting of data provided by the authors to benefit the reader, the posted materials are not copyedited and are the sole responsibility of the authors, so questions or comments should be addressed to the corresponding author.

Financial support. This work was supported by the European Union's Seventh Framework People Programme (Research Executive Agency grant 612308).

Disclaimer. The funding source had no role in study design; collection, analysis, or interpretation of the data; or writing of the report.

Potential conflicts of interest. All authors: No reported conflicts of interest. All authors have submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest. Conflicts that the editors consider relevant to the content of the manuscript have been disclosed.

References
1. Global, regional, and national age-sex specific all-cause and cause-specific mortality for 240 causes of death, 1990-2013: a systematic analysis for the Global Burden of Disease Study.
2. CDC EPIC Study Team. Community-acquired pneumonia requiring hospitalization among U.S. children.
3. Lower respiratory tract virus findings in mechanically ventilated patients with severe community-acquired pneumonia.
4. Respiratory syncytial virus and other respiratory viral infections in older adults with moderate to severe influenza-like illness.
5. CDC EPIC Study Team. Community-acquired pneumonia requiring hospitalization among U.S. adults.
6. Viral etiology of acute respiratory tract infections in children presenting to hospital: role of polymerase chain reaction and demonstration of multiple infections.
7. Out-of-hours antibiotic prescription after screening with C reactive protein: a randomised controlled study.
8. Emergency department management of febrile respiratory illness in children.
9. Rapid viral diagnosis for acute febrile respiratory illness in children in the emergency department.
10. Health care resource utilization and costs for influenza-like illness among midwestern health plan members.
11. Rapid antigen-based testing for respiratory syncytial virus: moving diagnostics from bench to bedside?
12. Impact of the availability of an influenza virus rapid antigen test on diagnostic decision making in a pediatric emergency department.
13. Point-of-care testing for respiratory viruses in adults: the current landscape and future potential.
14. Accuracy of rapid influenza diagnostic tests: a meta-analysis.
15. Diagnostic accuracy of rapid antigen detection tests for respiratory syncytial virus infection: systematic review and meta-analysis.
16. Influenza diagnosis and treatment in children: a review of studies on clinically useful tests and antiviral treatment for influenza.
17. Molecular diagnosis of respiratory viruses.
18. World Health Organization. In vitro diagnostics and laboratory technology: simple/rapid tests.
19. QUADAS-2: a revised tool for the quality assessment of diagnostic accuracy studies.
20. Bivariate analysis of sensitivity and specificity produces informative summary measures in diagnostic reviews.
21. Challenges with new rapid influenza diagnostic tests.
22. ESCAPED Study Group. Viruses detected by systematic multiplex polymerase chain reaction in adults with suspected community-acquired pneumonia attending emergency departments in France.
23. Emerging respiratory viruses other than influenza.
24. Factors affecting QuickVue Influenza A + B rapid test performance in the community setting.
25. Performance of rapid influenza diagnostic tests (QuickVue) for influenza A and B infection in India.
26. Performance of the Cobas® influenza A/B assay for rapid PCR-based detection of influenza compared to Prodesse ProFlu+ and viral culture.
27. Diagnostic performance of near-patient testing for influenza.
28. Systematic reviews and meta-analyses of diagnostic test accuracy.
29. Guidance for clinicians on the use of rapid influenza diagnostic tests.
30. Comparison of the BioFire FilmArray RP, GenMark eSensor RVP, Luminex xTAG RVPv1, and Luminex xTAG RVP Fast multiplex assays for detection of respiratory viruses.
31. "Stat" multiplex polymerase chain reaction: its role in clinical microbiology.
32. Comparison of the FilmArray respiratory panel and Prodesse real-time PCR assays for detection of respiratory pathogens.
33. Impact of early detection of respiratory viruses by multiplex PCR assay on clinical outcomes in adult patients.
34. Economic analysis of rapid and sensitive polymerase chain reaction testing in the emergency department for influenza infections in children.
35. Impact of a rapid respiratory panel test on patient outcomes.
36. Evaluation of a rapid antigen detection point-of-care test for respiratory syncytial virus and influenza in a pediatric hospitalized population in the Netherlands.