Evaluation and comparison of statistical methods for early temporal detection of outbreaks: A simulation-based study
Gabriel Bédubourg, Yann Le Strat
PLoS One, 2017-07-17. DOI: 10.1371/journal.pone.0181227

The objective of this paper is to evaluate a panel of statistical algorithms for temporal outbreak detection. Based on a large dataset of simulated weekly surveillance time series, we performed a systematic assessment of 21 statistical algorithms, 19 implemented in the R package surveillance and two other methods. We estimated the false positive rate (FPR), probability of detection (POD), probability of detection during the first week, sensitivity, specificity, negative and positive predictive values and F1-measure for each detection method. Then, to identify the factors associated with these performance measures, we ran multivariate Poisson regression models adjusted for the characteristics of the simulated time series (trend, seasonality, dispersion, outbreak sizes, etc.). The FPR ranged from 0.7% to 59.9% and the POD from 43.3% to 88.7%. Some methods had a very high specificity, up to 99.4%, but a low sensitivity. Methods with a high sensitivity (up to 79.5%) had a low specificity. All methods had a high negative predictive value, over 94%, while positive predictive values ranged from 6.5% to 68.4%. Multivariate Poisson regression models showed that the performance measures were strongly influenced by the characteristics of the time series. Past or current outbreak size and duration strongly influenced detection performance.

Public health surveillance is the ongoing, systematic collection, analysis, interpretation, and dissemination of data for use in public health action, to reduce the morbidity and mortality of health-related events and to improve health [1]. One of the objectives of health surveillance is outbreak detection, which is crucial to enabling rapid investigation and implementation of control measures [2]. The threat of bioterrorism has stimulated interest in improving health surveillance systems for early detection of outbreaks [3, 4], as has the reemergence of infectious diseases such as Middle East respiratory syndrome due to a novel coronavirus (MERS-CoV) in 2012 [5] or Ebola in West Africa in 2014 [6].

Nowadays, a large number of surveillance systems are computer-supported. Computer support and statistical alarms are intended to improve outbreak detection in traditional or syndromic surveillance [7, 8]. These systems routinely monitor large amounts of data, recorded as time series of counts for a given geographic area and a given population. They produce statistical alarms that must be confirmed by an epidemiologist, who determines whether further investigation is needed. One limitation of these detection systems is an occasional lack of specificity, leading to false alarms that can overwhelm the epidemiologist with verification tasks [9, 10]. It is thus important to implement statistical methods that offer a good balance between sensitivity and specificity, in order to detect the large majority of outbreaks without generating too many false positive alarms.

In the literature, a broad range of statistical methods has been proposed to detect outbreaks from surveillance data. The main statistical approaches have been reviewed by Shmueli et al. [11] and Unkel et al. [12].
Restricting these reviews to methods that detect outbreaks in time without integrating the spatial distribution of cases, the general principle is to identify a time interval in which the observed number of cases of an event under surveillance (i.e. the number of reported cases) is significantly higher than expected. This identification mainly follows a two-step process: first, an expected number of cases of the event of interest for the current time unit (generally a week or a day) is estimated; this expected value is then compared to the observed value by a statistical test, and a statistical alarm is triggered if the observed value is significantly different from the expected one. The main difference between statistical methods lies in how the expected value is estimated, most often using statistical process control, regression techniques or a combination of both [12].

A major constraint on the practical implementation of these methods is their capacity to be run on an increasing number of time series, provided by multiple sources of information and centralized in large databases [3, 13, 14]. Monitoring a large number of polymorphic time series requires flexible statistical methods able to deal with several well-known characteristics of such series: the frequency and variance of the number of cases, secular trend, and one or more seasonal components [14]. Even if some authors have proposed classifying time series into a small number of categories and seeking suitable algorithms for each category, in this automated and prospective framework statistical methods cannot easily be fine-tuned by choosing the most appropriate parameters for each time series in an operational way, as explained by Farrington et al. [15].

A key question for public health practitioners is which method(s) can be adopted to detect the effects of unusual events on the data. Some authors have proposed systematic assessments of the performance of certain methods in order to choose one reference algorithm [16-20]. They assessed these methods on a real dataset [16, 21], on a simulated dataset [18-20, 22, 23] or on real time series to which simulated outbreaks were added [24, 25]. Simulating data offers the advantage of knowing the exact occurrence of the simulated outbreaks and their characteristics (amplitude, etc.). For example, Lotze et al. developed a simulated dataset of time series and outbreak signatures [26]. In the same way, Noufaily et al. [9] proposed a thorough simulation study to improve the Farrington algorithm [15]. Guillou et al. [27] compared the performance of their own algorithm to that of the improved Farrington algorithm, using the same simulated dataset; this dataset was also used by Salmon et al. to assess their method [28]. To our knowledge, no study has thoroughly evaluated and compared the performance of a broad range of methods on a large simulated dataset.

The objective of this paper is to evaluate the performance of 21 statistical methods applied to large simulated datasets for outbreak detection in weekly health surveillance. The simulated dataset is presented in Section 2, the 21 evaluated methods and the performance measures in Section 3, and the evaluations and comparisons in Section 4. A discussion follows in the last section.

We simulated data following the approach proposed by Noufaily et al. [9].
First, simulated baseline data (i.e. time series of counts in the absence of outbreaks) were generated from a negative binomial model of mean μ and variance φμ, φ being the dispersion parameter (φ ≥ 1). The mean at time t, μ(t), depends on a trend and on seasonality modeled using Fourier terms:

μ(t) = exp( θ + βt + Σ_{j=1}^{m} [ γ1 cos(2πjt/52) + γ2 sin(2πjt/52) ] )    (1)

Time series were simulated from 42 parameter combinations (called scenarios, presented in Table 1 of [9]) with different values of θ, β, γ1, γ2, m and φ, respectively associated with the baseline frequency of counts, the trend, the seasonality (no seasonality: m = 0; annual seasonality: m = 1; biannual seasonality: m = 2) and the dispersion. For each scenario, 100 replicates of the baseline data (time series of 624 weeks) were generated, yielding 42 × 100 = 4,200 simulated time series. The last 49 weeks of each time series were named the current weeks: the evaluated algorithms were run on these most recent 49 weeks, and the performance measures described below were computed from detections during them.

Secondly, five outbreaks were simulated for each time series. Four outbreaks were generated in the baseline weeks. Each outbreak started at a randomly drawn week, and the outbreak size (i.e. the number of outbreak cases) was drawn from a Poisson distribution with mean equal to a constant k1 times the standard deviation of the counts at the starting week. The fifth outbreak was generated in the current weeks in the same manner, using another constant k2. As in [9], k1 took the values 0, 2, 3, 5 and 10 in baseline weeks and k2 the values 1 to 10 in current weeks. Finally, the outbreak cases were distributed over time according to a lognormal distribution with mean 0 and standard deviation 0.5.

A total of 231,000 time series were generated from the 42 scenarios: 21,000 during the first step of the simulation process (42 scenarios × 100 replicates × 5 values of k1) and 210,000 during the second step (21,000 × 10 values of k2), leading to a large simulated dataset covering the great variety of time series observed in real surveillance data. At the end of the simulation process, 10,290,000 current weeks had been generated, of which 6.2% were classified as outbreak weeks because they fell within an outbreak.

We studied 21 statistical methods, 19 of which are implemented in the R package surveillance [29, 30]:
• the CDC algorithm, the RKI 1, 2 and 3 algorithms and the Bayes 1, 2 and 3 algorithms [29],
• CUSUM variants: the original CUSUM [29, 32], a Rossi approximate CUSUM [32], a CUSUM algorithm whose expected values are estimated by a GLM [29], and a mixed Rossi approximate CUSUM GLM algorithm [29],
• the original Farrington algorithm [15] and the improved Farrington algorithm [9],
• a count data regression chart (GLRNB) [29, 33] and a Poisson regression chart (GLR Poisson) [29, 34],
• the OutbreakP method [35],
• the EARS C1, C2 and C3 algorithms [19, 36].

For all simulated time series, we used the tuning parameters recommended by the authors of each algorithm when available, and otherwise those proposed by default in the surveillance package. The commands used from the R package surveillance and the control tuning parameters chosen for these 19 algorithms are presented in Table 1. We also propose two additional methods not implemented in the surveillance package:
• a periodic Poisson regression where μ(t) is defined as in Eq (1); the threshold is the 1 − α quantile of a Poisson distribution with mean equal to the predicted value at week t,
• a periodic negative binomial regression, also defined as in Eq (1), where the threshold is the 1 − α quantile of a negative binomial distribution with mean equal to the predicted value at week t and a dispersion parameter estimated by the model.

These last two models were run on all the historical data; an alarm was triggered if the observed number of cases exceeded the upper limit of the prediction interval. These two methods are basic periodic regressions, and their R code is presented in S24 Appendix.

Table 1 (fragment recovered from extraction; method, surveillance command, control parameters):
RKI 3: algo.rki3(), default parameters [29]
GLR Negative Binomial: algo.glrnb(), ARL = 5, dir = "inc" [29, 33]
GLR Poisson: algo.glrpois(), ARL = 5, dir = "inc" [29, 34]
EARS C1: earsC(), method = "C1", α [19, 36]
EARS C2: earsC(), method = "C2", α [19, 36]
EARS C3: earsC(), method = "C3", α [19, 36]
OutbreakP: algo.outbreakP(), K = 100, ret = c("value") [35]

We evaluated the performance of the methods with three different α values: α = 0.001, α = 0.01 and α = 0.05. We considered eight measures to assess the performance of the methods:
• Measure 1 is the false positive rate (FPR). For each method and each scenario, the FPR is defined as the proportion of weeks with an alarm in the absence of an outbreak, as in [9]. Nominal FPRs were 0.0005 for α = 0.001, 0.005 for α = 0.01 and 0.025 for α = 0.05.
• Measure 2 is the probability of detection (POD). For each scenario and each current-week period, an outbreak is considered detected if an alarm is generated at least once between its start and its end [9]. POD is an event-based sensitivity (the entire outbreak interval counts as a single observation) and is thus the proportion of outbreaks detected among the 100 replicates.
• Measure 3 is the probability of detection during the first week (POD1week), which makes it possible to evaluate the methods' ability to enable early control measures.
• Measure 4 is the observation-based sensitivity (Se). Outbreak weeks associated with an alarm were defined as True Positives (TP), non-outbreak weeks without an alarm as True Negatives (TN), outbreak weeks without an alarm as False Negatives (FN) and non-outbreak weeks with an alarm as False Positives (FP). Thus Se = TP/(TP+FN).
• Measure 5 is the specificity, Sp = TN/(TN+FP). Unlike the FPR, which was calculated on current weeks without any simulated outbreak, specificity was calculated on all current weeks of the 210,000 time series that include current outbreaks.
• Measure 6 is the positive predictive value, PPV = TP/(TP+FP).
• Measure 7 is the negative predictive value, NPV = TN/(TN+FN).
• Measure 8 is the F1-measure, defined as the harmonic mean of the sensitivity and the PPV: F1 = 2 × Se × PPV / (Se + PPV).

In the Results section, we report averaged performance measures: the FPR is calculated over the 21,000 time series without outbreaks during the current weeks, and the other performance measures over the 210,000 time series with simulated outbreaks during the current weeks. The FPR was thus estimated prior to the simulation of current outbreaks, i.e. on the 49 current weeks of the 21,000 (5 × 4,200) time series. The other indicators (POD, POD1week, Se, Sp, PPV, NPV) were estimated once outbreaks had been simulated, i.e. on the current weeks of all 210,000 time series.
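To make the two added methods concrete, here is a minimal R sketch of the periodic Poisson regression detector. The authors' actual code is in S24 Appendix; this is an independent illustrative re-implementation that assumes a single annual Fourier term, and the function and variable names are ours.

```r
# Minimal sketch of a periodic Poisson regression detector: fit a Poisson
# GLM with a linear trend and annual Fourier terms on the historical weeks,
# then flag the current (last) week if its count exceeds the 1 - alpha
# quantile of a Poisson distribution with the predicted mean.
periodic_poisson_alarm <- function(counts, alpha = 0.01, period = 52) {
  n <- length(counts)
  d <- data.frame(y  = counts,
                  t  = seq_len(n),
                  c1 = cos(2 * pi * seq_len(n) / period),
                  s1 = sin(2 * pi * seq_len(n) / period))
  fit <- glm(y ~ t + c1 + s1, family = poisson, data = d[-n, ])  # history only
  mu_hat <- predict(fit, newdata = d[n, ], type = "response")
  threshold <- qpois(1 - alpha, lambda = mu_hat)
  list(observed = counts[n], threshold = threshold,
       alarm = counts[n] > threshold)
}

# Example on a purely seasonal baseline: alarms should be rare
set.seed(1)
y <- rpois(624, lambda = exp(1.5 + 0.5 * cos(2 * pi * (1:624) / 52)))
periodic_poisson_alarm(y)
```

The negative binomial variant would simply replace glm() with MASS::glm.nb() and qpois() with qnbinom(1 - alpha, mu = mu_hat, size = fit$theta), the dispersion being estimated by the model.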
For each α value, we propose an ROC-like representation of these results with four plots: sensitivity versus 1 − specificity, POD and POD1week as functions of the FPR, and sensitivity versus PPV.

To identify the factors associated with the performance measures for α = 0.01 and to assess the strength of the associations, multivariate Poisson regression models [38] were run, as in Barboza et al. [39] or Buckeridge et al. [40]. A set of covariates corresponding to the characteristics of the simulated time series was included: trend (yes/no), seasonality (none/annual/biannual), the baseline frequency coefficient θ, the dispersion coefficient φ and k1, representing the amplitude and duration of past outbreaks. The last three covariates and k2 were treated as continuous and modeled using fractional polynomials. The statistical methods themselves were introduced as covariates in order to estimate performance ratios, i.e. ratios of the performances of two methods, adjusted for the characteristics of the time series represented by the other covariates. Adjusted FPR, POD, POD1week, sensitivity and specificity ratios were estimated with the improved Farrington algorithm as the reference. 95% confidence intervals were calculated with robust estimation of the standard errors. For each continuous covariate modeled by fractional polynomials, ratios are presented for each value [41].

The simulation study, the implementation of the detection methods and the estimation of performance were carried out with R (version 3.2.2), in particular with the package surveillance. The Poisson regression models used to identify the factors associated with the performance measures and to assess the strength of the associations were run with Stata 14.

In this section, we present the averaged performance of each evaluated method, i.e. its performance irrespective of the scenario and of the characteristics of the time series. Table 2 presents the averaged FPR, specificity, POD, POD1week, sensitivity, negative predictive value, positive predictive value and F1-measure over all 42 scenarios and all past and current outbreak amplitudes and durations, for α = 0.01. Overall, the FPR ranged from 0.7% to 59.9% and the POD from 43.3% to 88.7%. Methods with the highest specificity, such as the improved Farrington method or the periodic negative binomial regression, presented a POD lower than 45% and a sensitivity lower than 21%. Averaged measures for α = 0.001 and α = 0.05 are presented in S1 Table and S2 Table. The performances of the RKI 1-3, GLR negative binomial, GLR Poisson, Bayes 1-3 and OutbreakP algorithms do not vary with α (see Table 1); they are therefore reported only in Table 2. For each method, a radar chart presenting measures 1 to 7 for α = 0.01 is provided in S23 Appendix.

Two groups stand out from the rest. The first consists of Bayes 1, 2 and 3: these methods present the best POD (around 0.8) and the best POD1week, with an FPR around 10%. The second consists of the four CUSUM methods: CUSUM, CUSUM Rossi, CUSUM GLM and CUSUM GLM Rossi. For α = 0.01, these methods present the best sensitivity (around 0.80) but the lowest specificity (0.55) and the highest FPR (0.40). Note that while most of the algorithms' test statistics are based on the likelihood of single-week observations, independent of recent ones, CUSUMs are not; this may be important for applications where detection of gradual events, rather than one-week spikes, is especially critical.
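To illustrate why a CUSUM-type statistic behaves differently, here is a generic Poisson CUSUM in R. This is a textbook form, not the exact algo.cusum() implementation of the surveillance package, and mu0, mu1 and h are illustrative parameter names: evidence of an upward shift accumulates across weeks, so a sustained moderate excess can trigger an alarm even when no single week is extreme.

```r
# Generic Poisson CUSUM (textbook form). mu0 is the in-control mean,
# mu1 the smallest shifted mean worth detecting, h the decision limit.
poisson_cusum <- function(counts, mu0, mu1, h) {
  k <- (mu1 - mu0) / (log(mu1) - log(mu0))  # Poisson CUSUM reference value
  s <- 0
  alarms <- logical(length(counts))
  for (t in seq_along(counts)) {
    s <- max(0, s + counts[t] - k)  # accumulate the weekly excess over k
    alarms[t] <- s >= h             # alarm once the decision limit is reached
    if (alarms[t]) s <- 0           # restart the chart after an alarm
  }
  alarms
}

# A sustained shift from mean 5 to mean 7 after week 30 is flagged even
# though most individual weekly counts remain unremarkable.
set.seed(2)
y <- c(rpois(30, 5), rpois(20, 7))
which(poisson_cusum(y, mu0 = 5, mu1 = 7, h = 4))
```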
The OutbreakP method had the lowest specificity without having a better POD or POD1week than the first two groups. Finally, a third group consists of the remaining methods, which had good specificity (over 0.9) but lower sensitivity, POD and POD1week than the first two groups. All 21 methods presented a high negative predictive value, greater than 94%. The PPV of OutbreakP was very low (6.5%), while the periodic negative binomial GLM had the highest PPV (68.4%).

A first way to visualize differences between methods is to plot POD and FPR according to the scenario and to the k1 or k2 values. As an illustration, Fig 2 shows the performance of the CDC method. The first row represents the FPR for increasing past outbreak constants k1 = 0, 2, 3, 5 and 10 across the 42 scenarios. The second row shows the POD as a function of k2 for the 42 scenarios (each curve corresponds to a simulated scenario), again for k1 = 0, 2, 3, 5 and 10. It clearly shows that performance depends on the scenario. The same plots, with tables of numerical values for each method and for the different α values, are presented in S2 Appendix to S22 Appendix. To compare the 21 methods more directly, S1 Appendix displays, on a single figure, their FPR according to the scenarios and their POD according to the k2 values, for k1 = 5 and α = 0.01. To better understand which characteristics are associated with each performance measure and to compare each method with the improved Farrington method, we present the results obtained from the multivariate Poisson regression models in the next section.

Adjusted performance ratios and associated factors

Table 3 presents the adjusted ratios for performance measures 1 to 5, as described in the Methods section (α = 0.01 for improved Farrington, original Farrington, periodic Poisson and negative binomial GLMs, CDC and EARS C1-C3; α = 0.05 for Bayes 1-3).

• Adjusted FPR ratios decreased when the amplitude and duration of past outbreaks (driven by k1) increased. It is indeed more difficult to detect an outbreak when past outbreaks have occurred, especially when those outbreaks were large and when the method does not down-weight their influence in estimating the expected number of cases. The adjusted FPR ratio was 2.75 times higher for time series with a secular trend than for the others. As we simulated time series with a non-negative trend (β ≥ 0 in Eq (1)), the FPR was expected to increase in the presence of a trend, especially for methods that do not integrate a trend in the estimation of the expected number of cases. In the same way, annual seasonality (and biannual seasonality to an even greater extent) and overdispersion increased the FPR. We observed a nonlinear relation between the FPR and the baseline frequency: the FPR ratio increased from the lowest frequencies up to 12 cases per week, then decreased for the highest frequencies, with no clear explanation. Only the periodic negative binomial GLM presented an FPR lower than that of the improved Farrington algorithm (FPR ratio = 0.71). The adjusted FPR ratios of OutbreakP and of all CUSUM variants were higher than 40. Another group of methods presented FPR ratios below 10: CDC, the RKI variants, the EARS methods, the periodic Poisson GLM, the original Farrington, Bayes 2 and GLR negative binomial. The FPR ratios of the remaining methods (Bayes 1 and 3, and GLR Poisson) were between 10 and 17.
• Adjusted specificity ratios were almost all equal to 1, as the amplitude and duration of past outbreaks had little influence on specificity. They were significantly lower for time series with a secular trend (adjusted specificity ratio = 0.84) or with annual or biannual seasonality (ratios of 0.99 and 0.98, respectively). Specificity decreased when dispersion increased but increased when the baseline frequency (θ in Eq (1)) increased. Only the periodic negative binomial GLM presented a specificity as good as that of the improved Farrington method (specificity ratio = 1.00).
• Adjusted POD ratios decreased significantly when past outbreak amplitude and duration (k1) increased, and increased when current outbreak amplitude and duration (k2) increased, as expected. POD was higher for time series with secular trends, which can be explained by the positive trend. POD decreased in the presence of annual or biannual seasonality (POD ratios of 0.97 and 0.92, respectively). Only the highest dispersion value (φ = 5) had an influence on POD (adjusted POD ratio = 1.09). Bayes 1, 2 and 3, the CUSUM variants and the GLR Poisson method presented the highest POD ratios, from 1.75 (GLR Poisson) to 1.95 (CUSUM GLM). No method was less able to detect an outbreak than the improved Farrington algorithm.
• POD1week presented results similar to those of POD. Adjusted POD1week ratios were significantly lower than the corresponding POD ratios for EARS C3 (0.25 versus 1.25), for CDC (0.55 versus 1.04) and for GLR negative binomial (0.87 versus 1.17). The other methods presented POD1week ratios similar to or greater than their POD ratios.
• Finally, similar results were observed for sensitivity as for POD. Bayes 2 and 3, OutbreakP, RKI 3, the CUSUM variants and the GLR Poisson method presented the highest sensitivity ratios, from 2.04 (RKI 3) to 3.89 (CUSUM GLM). As in the POD model, no method was less able to detect an outbreak than the improved Farrington algorithm.

Estimates from the multivariate regression models explaining PPV and NPV are presented in S3 Table.

We presented a systematic assessment of the performance of 21 outbreak detection algorithms using a simulated dataset. One advantage of a simulation study for benchmarking outbreak detection methods is the a priori knowledge of the occurrence of outbreaks, which enables the construction of a true gold standard. Some authors have already used simulation studies to assess outbreak detection methods [18, 19, 23], and others have suggested adding simulated outbreaks to real surveillance data baselines [16, 24, 25], but without proposing a systematic assessment of the performance of a broad range of outbreak detection methods. Choi et al. [20] proposed such a study design, based on the daily simulation method of Hutwagner et al. [18], but did not study the influence of past outbreaks or of time series characteristics (frequency, variance, secular trend, seasonality, etc.) on the methods' performance. The simulated dataset used in our study is large enough to include the considerable diversity of time series observed in real surveillance systems, and we also simulated a high diversity of outbreaks in terms of amplitude and duration. In our opinion, this simulated dataset is highly representative of real weekly surveillance data. Extending our results to daily surveillance data would require a similar study on daily surveillance data.
These characteristics of the simulated dataset enabled us to propose simple intrinsic performance indicators, such as FPR, POD, sensitivity and specificity, to compare the evaluated methods; they also allow our results to be compared with those of other studies based on the same dataset. The negative and positive predictive values are proposed as operational indicators for decision making when an alarm is, or is not, triggered by an algorithm. A further benefit of adding outbreaks to the baseline weeks is that the outlier removal strategies considered by many authors can be objectively tested and evaluated.

One limitation of the simulation process is that only increasing secular trends were used. Increasing secular trends facilitate outbreak detection, while decreasing trends would hamper it. Furthermore, our study was designed for weekly surveillance, while syndromic surveillance systems are most often daily. In daily surveillance time series, other periodic effects, such as the day-of-the-week effect, need to be taken into account, which is not the case in our study.

The performance of the evaluated methods was considered only from a general perspective, i.e. detecting outbreaks in a large number of polymorphic weekly time series. In a pragmatic approach, it seems very difficult to adapt the tuning parameters of these methods to every time series. In France, public health agencies such as the French National Public Health Agency (Santé publique France), the French Agency for Food, Environmental and Occupational Health Safety (Anses) and the French Armed Forces Center for Epidemiology and Public Health (CESPA) have deployed computer-supported outbreak detection systems in traditional or syndromic surveillance contexts [42-45]. They monitor a broad range of time series on a daily or weekly basis without, however, having rigorously evaluated the algorithms implemented.

The performance of the methods varied according to the baseline profile, depending on trend, seasonality, baseline frequency and overdispersion. Even though similar meta-models have been proposed before, for example by Buckeridge et al. [40], an original aspect of our approach was to compare performance indicators adjusted for these parameters in a regression model. As expected, the adjusted performance of the 21 methods was penalized by increasing amplitude and duration of past outbreaks and by annual or biannual seasonality. Conversely, performance improved with increasing amplitude and duration of the current outbreaks to be detected. More generally, the methods' performance was highly dependent on the simulation tuning parameters.

We proposed various measures to monitor the performance of outbreak detection methods. The false positive rate (FPR) and the probability of detection (POD) were proposed by Noufaily et al. [9]. We used both an observation-based sensitivity measure (Se) and an event-based one (POD). The concept of sensitivity based on alerting in each observation period is not applicable in some settings, because signals of interest can be intermittent and multimodal and may even be interpreted as multiple events. Many of the algorithms are based on the likelihood of single-week observations, independent of recent ones, but CUSUMs are not; their large sensitivity advantage, which diminishes for POD and POD1week, may result from the way the outbreak effects are modeled.
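To make the distinction between the two notions of sensitivity explicit, here is a small R sketch with illustrative names: alarm and outbreak are logical vectors over the current weeks of one series, and each contiguous run of outbreak weeks is treated as one outbreak.

```r
# Observation-based sensitivity (Se, measure 4) versus event-based
# probability of detection (POD, measure 2) for a single series.
se_and_pod <- function(alarm, outbreak) {
  TP <- sum(alarm & outbreak)
  FN <- sum(!alarm & outbreak)
  se <- TP / (TP + FN)                       # week-level sensitivity
  runs   <- rle(outbreak)                    # contiguous outbreak episodes
  ends   <- cumsum(runs$lengths)
  starts <- ends - runs$lengths + 1
  ob     <- which(runs$values)               # indices of outbreak runs
  detected <- vapply(ob, function(i) any(alarm[starts[i]:ends[i]]),
                     logical(1))
  pod <- mean(detected)                      # share of outbreaks with >= 1 alarm
  c(Se = se, POD = pod)
}

# A 4-week outbreak alarmed only in its final week: Se = 0.25 but POD = 1.
alarm    <- c(rep(FALSE, 10), FALSE, FALSE, FALSE, TRUE, rep(FALSE, 5))
outbreak <- c(rep(FALSE, 10), TRUE,  TRUE,  TRUE,  TRUE, rep(FALSE, 5))
se_and_pod(alarm, outbreak)
```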
By contrast, the POD measure is uniformly applicable. Public health response to an outbreak depends on its early detection. In the POD definition, an outbreak is considered detected even if the first statistical alarm is issued during its last week. To estimate early detection performance, we therefore also proposed the POD during the first week; this measure cannot be considered alone, because an outbreak still needs to be detected even if detection happens belatedly. While POD1week indicates a method's ability to detect an outbreak early, we did not propose a measure of timeliness like Salmon et al. [28] or Jiang et al. [45]; this topic could be explored in a further study. To give some insight into the speed of detection, we calculated it for the improved Farrington and the CUSUM GLM Rossi algorithms: on average, over the whole dataset, the improved Farrington method took 1.23 weeks to detect an outbreak, versus 1.16 weeks for the CUSUM GLM Rossi method.

No method presented outbreak detection performance sufficient to provide reliable monitoring for a large surveillance system on its own. Methods with a high specificity and a low FPR, such as the improved Farrington or CDC algorithms, are not sensitive enough to detect the majority of outbreaks; these two algorithms could be implemented in systems that monitor health events in order to detect the largest outbreaks with the highest specificity. Conversely, methods with the highest sensitivity, able to detect the majority of outbreaks (Bayes 3 or CUSUM GLM Rossi, for example), produced an excessive number of false alarms, which could saturate a surveillance system and overwhelm the epidemiologist in charge of outbreak investigations. As with a screening test in clinical practice, the aim of an early outbreak detection method is to identify the largest possible number of outbreaks without producing too many false alarms.

The performances presented in this paper should be interpreted with caution, as they depend both on the tuning parameters and on the current implementation of the methods in the R packages. Packages evolve over time and their default parameters may change. This work, based on available R packages, may thus be viewed as a starting point for researchers to extend the comparison of methods and/or to optimize their tuning on their own data. Since no single algorithm presented sufficient performance for all scenarios, combinations of methods should be investigated to achieve a predefined minimum performance. Other performance criteria should also be proposed in order to improve the choice of algorithms to be implemented in surveillance systems. We therefore suggest that a study of the period between the first week of an outbreak and the first triggered alarm be conducted.

S1 Table. FPR, specificity, POD, POD1week, sensitivity, negative predictive value, positive predictive value and F1-measure for 12 evaluated methods and α = 0.001 (for past outbreak constants k1 = 0, 2, 3, 5, 10 and current outbreak constants k2 = 1 to 10 for POD and sensitivity). (PDF)

S2 Table. FPR, specificity, POD, POD1week, sensitivity, negative predictive value, positive predictive value and F1-measure for 15 evaluated methods and α = 0.05 (for past outbreak constants k1 = 0, 2, 3, 5, 10 and current outbreak constants k2 = 1 to 10 for POD and sensitivity). (PDF)
S3 Table. Other performance ratios, adjusted for past and current outbreak duration and amplitude, trend, seasonality, dispersion and baseline frequency (α = 0.01 for improved Farrington, original Farrington, periodic Poisson and negative binomial GLMs, CDC and EARS C1-C3; α = 0.05 for Bayes 1-3). (PDF)

References

Framework for evaluating public health surveillance systems for early detection of outbreaks: recommendations from the CDC Working Group. MMWR Recommendations and Reports.
The emerging science of very early detection of disease outbreaks.
Statistical issues and challenges associated with rapid detection of bio-terrorist attacks.
Outbreak detection through automated surveillance: a review of the determinants of detection.
Isolation of a novel coronavirus from a man with pneumonia in Saudi Arabia.
The next epidemic: lessons from Ebola.
Practical usage of computer-supported outbreak detection in five European countries.
A system for automated outbreak detection of communicable diseases in Germany.
An improved algorithm for outbreak detection in multiple surveillance systems.
Public health monitoring tools for multiple data streams. MMWR Morbidity and Mortality Weekly Report.
Statistical challenges facing early outbreak detection in biosurveillance.
Statistical methods for the prospective detection of infectious disease outbreaks: a review.
Automated biosurveillance data from England and Wales.
A statistical algorithm for the early detection of outbreaks of infectious disease.
An evaluation and comparison of three commonly used statistical models for automatic detection of outbreaks in epidemiological data of communicable diseases.
Assessing surveillance using sensitivity, specificity and timeliness.
Comparing aberration detection methods with simulated data.
Comparing syndromic surveillance detection methods: EARS versus a CUSUM-based methodology.
Comparison of various statistical methods for detecting disease outbreaks.
Statistical algorithms for early detection of the annual influenza peak season in Hong Kong using sentinel surveillance data.
A simulation model for assessing aberration detection methods used in public health surveillance for systems with limited baselines.
Evaluation of a method for detecting aberrations in public health surveillance data.
Comparing early outbreak detection algorithms based on their optimized parameter values.
A simulation study comparing aberration detection algorithms for syndromic surveillance. BMC Medical Informatics and Decision Making.
Lotze T, Shmueli G, Yahav I. Simulating multivariate syndromic time series and outbreak signatures. Social Science Research Network.
An extreme value theory approach for the early detection of time clusters. A simulation-based assessment and an illustration to the surveillance of Salmonella.
Bayesian outbreak detection in the presence of reporting delays.
surveillance: An R package for the monitoring of infectious diseases.
surveillance: Temporal and spatio-temporal modeling and monitoring of epidemic phenomena.
Detection of aberrations in the occurrence of notifiable diseases surveillance data.
An approximate CUSUM procedure for surveillance of health events.
Count data regression charts for the monitoring of surveillance time series.
Discussion paper, Sonderforschungsbereich 386 der Ludwig-Maximilians-Universität München.
Robust outbreak surveillance of epidemics in Sweden.
The bioterrorism preparedness and response Early Aberration Reporting System (EARS).
Agreement, the F-measure, and reliability in information retrieval.
A modified Poisson regression approach to prospective studies with binary data.
Factors influencing performance of internet-based biosurveillance systems used in epidemic intelligence for early detection of infectious diseases outbreaks.
Predicting outbreak detection in public health surveillance: quantitative analysis to enable evidence-based method selection.
The use of fractional polynomials to model continuous risk variables in epidemiology.
Automated early warning system for the surveillance of Salmonella isolated in the agro-food chain in France.
Ten years experience of syndromic surveillance for civil and military public health.
Value of syndromic surveillance within the Armed Forces for early warning during a dengue fever outbreak in French Guiana in 2006. BMC Medical Informatics and Decision Making.
Generalized AMOC curves for evaluation and improvement of event surveillance.

Acknowledgments

The authors would like to thank Angela Noufaily and Paddy Farrington for providing them with simulated datasets and R code to simulate outbreaks.

Author contributions

Conceptualization: Gabriel Bédubourg, Yann Le Strat.