title: Backtesting the predictability of COVID-19
authors: Gordeev, Dmitry; Singer, Philipp; Michailidis, Marios; Muller, Mathias; Ambati, SriSatish
date: 2020-07-22

The advent of the COVID-19 pandemic has instigated unprecedented changes in many countries around the globe, putting a significant burden on the health sectors, affecting the macroeconomic conditions, and altering social interactions amongst the population. In response, the academic community has produced multiple forecasting models, approaches, and algorithms to best predict the different indicators of COVID-19, such as the number of confirmed infected cases. Yet, researchers had little to no historical information about the pandemic at their disposal to inform their forecasting methods. Our work studies the predictive performance of models at various stages of the pandemic to better understand their fundamental uncertainty and the impact of data availability on such forecasts. We use historical data of COVID-19 infections from 253 regions, for the period from 22nd January 2020 until 22nd June 2020, to predict, through a rolling window backtesting framework, the cumulative number of infected cases for the next 7 and 28 days. We implement three simple models to track the root mean squared logarithmic error over this 6-month span: a baseline model that always predicts the last known value of the cumulative confirmed cases, a power growth model, and an epidemiological model called SEIRD. Prediction errors are substantially higher in the early stages of the pandemic, resulting from limited data. Throughout the course of the pandemic, errors decrease slowly but steadily. The more confirmed cases a country exhibits at any point in time, the lower the error in forecasting future confirmed cases. We emphasize the significance of having a rigorous backtesting framework to accurately assess the predictive power of such models at any point in time during the outbreak, which in turn can be used to assign the right level of certainty to these forecasts and facilitate better planning.

In the event of a pandemic outbreak, stakeholders such as politicians, pharmaceutical companies, or hospitals attempt to forecast the spread of the pandemic to make informed decisions about actions and policies such as lock-downs, supply chain optimization, or, in the worst case, even crucial decisions about intensive care units. However, every pandemic is unique in itself, and COVID-19 reached a magnitude and severity that has not been observed over the last decades [17, 21, 52]. As a result, little historical information about similar pandemics was at our disposal at the beginning of the outbreak to make good estimates about the future development of the disease. This information bottleneck leads to uncertainty in forecasting methods and can be crucial in the efforts to develop new medicine, vaccines, public guidelines, and other important measures to guarantee public health and safety.

Background. Substantial research has been published over the course of the pandemic, as evident from the COVID-19 Open Research Dataset (CORD-19) [1] containing close to 30,000 COVID-19 related research papers; the dataset has been extended to cover publications on similar corona-viruses and has fostered NLP-related research on the corpus [4, 28].
In Figure 1, we depict the number of research publications containing the term "Covid" in the title and having a publication date, as well as the weekly number of new cases retrieved from data made available by the Johns Hopkins University (see Section 2). Similar to the rapid spread of the pandemic, we observe an accelerating number of publications, indicating the strong efforts of the research community to study the pandemic across disciplines. Many different types of models have been proposed to model and forecast the number of infections within and across countries. A prominent and frequently applied type is the classical epidemiological framework [32] modeling susceptible, exposed, infected, and recovered agents (SEIR), which has also found application in several COVID-19 forecasting approaches [13, 14, 26, 27, 35, 36, 48]. A second category comprises autoregressive moving average models, which extrapolate future values by aggregating recent data. These types of models have had many successful implementations in time series forecasting (e.g., financial methods [16]) and have recently also been applied to predict COVID-19 numbers [15, 42, 45]. Third, several curve fitting and statistical models have been proposed as well-tailored for COVID-19 forecasting, including power-law models [56], simple linear or polynomial models [41, 54], logistic models [49], mixed-effects models [11, 19], and many others. Finally, many approaches in the realm of machine learning have been developed [5], including, e.g., Facebook's prophet algorithm [40], gradient boosted trees [47], or neural networks [55]. This list only covers a small fraction of published models; an exemplary overview of others is given in [3, 20, 33]. Kaggle, a large competitive data science platform with around five million users [25], conducted a series of five competitions [6, 7, 8, 9, 10] allowing data scientists to develop and submit their COVID-19 forecasting models to predict confirmed cases and fatalities across ∼300 regions (mostly country-level predictions, with province-level or state-level predictions in certain cases) for no less than 30 days into the future. The models were always developed on historical data and then evaluated live over a period of four or more succeeding weeks. Across all competitions, different types of models have performed well, including the above-mentioned machine learning models (boosting trees, neural networks) as well as a diverse set of curve fitting, statistical, and autoregressive models. The series of competitions captures the state of development of these kinds of models during a pandemic quite well, with the models being initially quite simple and uninformed [6, 7] and developing into more robust models and ensembles over time [8, 9, 10]. While many strong solutions have been developed, it has also been shown that many subjective adjustments [22, 37] can make a model shine or fail, and that it is particularly difficult to forecast rapidly changing patterns.

Objectives. As summarized, a plethora of research has been conducted in order to forecast the COVID-19 pandemic. However, given the rapidly changing environments, data irregularities, as well as the inherent difficulty of predicting these numbers, this type of research has also been criticized due to the sensitivity of the topic at hand and the potentially huge implications of poorly performing models [29, 30].
Wynants et al. [53] conducted a review of 66 published models focused on predicting different aspects of COVID-19 or similar diseases, including models for forecasting hospital admissions due to pneumonia, diagnostic models for detecting COVID-19, as well as prognostic models for assessing mortality risk, length of stay in the hospital, and exacerbation of the disease. Their review rated these models as being at high or unclear risk of bias, due to improper testing frameworks and the non-representative selection of control patients. They also highlighted the lack of clarity in the reporting of the findings and that these models carry a high risk of overfitting. They concluded that a reporting guideline needs to be adhered to by all works predicting COVID-19 or similar diseases to avoid unreliable predictions, as the latter "could cause more harm than benefit in guiding clinical decisions". We still strongly believe in the fundamental value of these types of models, specifically for application in potential second waves or other future pandemics. In order to utilize these types of models, they have to be properly evaluated and made transparent [18]. Nonetheless, most of these models have been developed during the outbreak of the pandemic and thus could only be evaluated on historical data up to that point. While some countries still see rising numbers of COVID-19 infections as of this writing (e.g., Brazil, India, or the US), most countries are well past the peak and see a rapid flattening of the curves. However, a few potential second waves may already be emerging [50]. Consequently, we are now in the unique position to backtest and investigate the predictive performance of COVID-19 forecasting models across countries at different points in time. This not only allows us to study the fundamental prediction difficulty of infection curves, but also to measure predictive performance at various stages of the pandemic.

Contributions and findings. To study these and similar questions, we make the following contributions: (i) We apply two simple, yet well-known and robust, short-term forecasting methods along with a baseline to predict confirmed COVID-19 cases. (ii) We introduce a thorough backtesting framework that allows us to provide accurate assessments of a model's prediction performance. (iii) We utilize this framework to empirically study the general predictability of COVID-19 across various stages of the outbreak. Our work highlights the importance of properly testing, tracking, and quantifying the prediction error through time as well as through different levels of accumulated infected cases. We observe that the prediction error is substantially higher in the early stages of the pandemic, when the number of confirmed cases is still low and the trends are still undeveloped. This is followed by a period of approximately 15 days (past the early days of March) in which the error drops significantly, by a factor of about 3.5. From that point on, it regresses steadily to lower levels as more data becomes available.

This paper is organized as follows. Section 2 describes the source of the data used, as well as the methods by which the data was transformed and processed to underpin the experiments. A power growth model and a version of the SEIR model called SEIRD are optimized and applied via multiple moving windows in a backtesting setting across all countries to predict the confirmed infected cases of COVID-19 and track the prediction error over time.
Section 3 describes the methodology supporting these models in terms of their parameters, optimization routines, and the loss functions minimized within the context of the backtesting framework. Section 4 highlights the conducted experiments and core findings. Finally, the conclusions of the experiments are drawn in Section 5.

The primary source of data is the data repository of the Johns Hopkins University Center for Systems Science and Engineering [2]. It contains daily updates of confirmed, deceased, and recovered cases at the country level. Due to irregularities in the way different countries report daily COVID-19 statistics, we employ a basic data cleaning routine. As we are working on a cumulative level of confirmed cases, we aim at guaranteeing monotonicity. The corrective measure to ensure monotonicity is applied when the value remains the same between two consecutive dates and then increases sharply. In this case, the latter of the two equal values is replaced with a linear interpolation of the neighboring values (i.e., the average of the previous value and the next value). The reasoning behind this measure is that cases often remain the same due to irregularities and delays in the reporting systems [31, 43]. An example of applying this transformation is shown in Figure 2: assuming the cumulative confirmed count is originally as depicted by the solid blue line, we transform it into the dashed blue line. Even though the overall expansion of a curve may not be linear with respect to time, finding a better method to correct the same-value irregularity is a laborious task and out of scope for this analysis. Overall, our dataset contains 253 regions, with daily statistics ranging from the 22nd January 2020 to the 22nd June 2020. We observe around 9.1 million confirmed cases and 472,000 fatalities; see also Figure 1 for a visualization of the development over time.
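To make the correction concrete, the following is a minimal sketch of this cleaning step in Python, assuming one region's cumulative counts are held in a pandas Series indexed by date; the function name is ours, and the simple "any increase" trigger stands in for the sharper-increase condition described above:

```python
import pandas as pd

def correct_flat_then_jump(cum: pd.Series) -> pd.Series:
    """Replace a repeated cumulative count that is followed by an increase
    with the average of its neighbours (a linear interpolation)."""
    vals = cum.astype(float).to_numpy().copy()
    for i in range(1, len(vals) - 1):
        # A flat day followed by a jump hints at a reporting delay rather
        # than a true plateau, so the latter of the two equal values is smoothed.
        if vals[i] == vals[i - 1] and vals[i + 1] > vals[i]:
            vals[i] = (vals[i - 1] + vals[i + 1]) / 2.0
    return pd.Series(vals, index=cum.index)

# Example: [10, 10, 30, 35] becomes [10, 20, 30, 35].
raw = pd.Series([10, 10, 30, 35],
                index=pd.date_range("2020-03-01", periods=4))
cleaned = correct_flat_then_jump(raw)
```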
This section details the elements utilized to implement the experiments of tracking and understanding the error in predicting the COVID-19 confirmed infected cases over time, across the globe, at the country level. We start by elaborating on our core backtesting methodology in Section 3.1, which we utilize to study the RMSLE loss function (see Section 3.2) over time. Within the scope of backtesting, we employ the three models described in Section 3.3: a simple baseline model, a power growth model, and an extension of the well-known SEIR model, called SEIRD.

Backtesting, or the process of evaluating a model or an algorithm over different past periods of time, is commonly associated with trading strategies, banking, or risk prediction [51]. Backtesting in predictive modeling can be an important tool for finding the optimal parameters of the used models, as well as for measuring the volatility of predictions through time. The assessment of forecast accuracy can be dramatically biased if done on the same data used for model fitting [23]. Setting up a single hold-out sample can serve as a way to derive more accurate forecast error estimates; however, it does not provide information about how the model accuracy improves over time as more information becomes available. Moreover, an assessment based on a single sample is not robust, given the limited size of the data available to fit the models.

In the context of the COVID-19 pandemic, backtesting the models aimed at predicting different aspects of the disease (confirmed, deceased, or recovered cases) can facilitate understanding of the sensitivity of these models in producing accurate and robust results given varying sizes of training history. Such an approach can enable defining the time (or the amount of training history) required to produce results of certain accuracy levels, which is quite often essential for using them efficiently in decision systems. Using the EDI model, an exponentially decreasing intensity growth model, Moriconi [39] backtested China's daily confirmed COVID-19 cases starting from 13th February 2020 and observed substantial overestimation for approximately the first week of predictions before the model became significantly more accurate. Volatility in predictions (varying levels of over- and underestimation) was also observed in the work of Lesage [34], where the Hawkes process [24] was utilized to predict, via backtesting, the number of confirmed cases in both France and China in different periods of February and March. Rouabah et al. [44] used the SEIQRDP model, a variant of the SEIRD model that additionally incorporates quarantined (Q) individuals, considered as active cases, as well as a protected population (P) of cases that strictly follow the standard advised protection measures, in order to forecast the elements of that model for the six months past the last training day of 24th May 2020 for Algeria. Their work emphasizes the threat of creating unstable models due to overfitting and underfitting, and they point out that overfitting is a major issue in epidemic dynamical models due to the noise embedded in the data. The SEIQRDP model's parameters were optimized using a genetic algorithm, informed by published values for these parameters. To find the optimal number of iterations for the genetic algorithm to obtain the best parameters, a time-based cross-validation procedure was applied to different countries, such that the first n days of a given country's infection numbers are used to fit the algorithm and the last v days are used for validation. This process was tested on Italy, Spain, Germany, and South Korea before being applied to Algeria. The ratio v/n can be adjusted based on the number of parameters that need optimization; in this case, the ratio was about 1/4. The study highlights that there is an inverse relationship between the training sample's size and the number of iterations required by the genetic algorithm: as more data becomes available for a given country, the optimal number of iterations decreases. Therefore, re-evaluating the optimization at different points in time is important for obtaining the most accurate results.

With the application of backtesting, we can not only derive the accuracy of predictions made in the past, but also show how the accuracy changes during the pandemic. The main idea behind backtesting is to let the model make predictions at a fixed time point t in the past and estimate the error at the time point t + H, where H is a predefined prediction horizon. In this paper, we focus on two values, H = 7 days for short-term predictions and H = 28 days for a longer forecast. Denote by X a p × N matrix, where p is the number of regions and N is the number of days with available numbers of confirmed cases.
Let us denote by $X^{(t_1)(t_2)} = (X_{i,j})$, with $i = 1, \dots, p$ and $j = t_1, \dots, t_2$, the matrix of observations available between days $t_1$ and $t_2$, and by $X^{(t)} = (X_{i,t})$, with $i = 1, \dots, p$, the values at day $t$. The backtesting implies fitting, at each time point $t$, a model $f$ on all the data available up to $t$:

$$f_t = \operatorname*{arg\,min}_{f} L_{fit}\left(f, X^{(1)(t)}\right),$$

where $L_{fit}$ is the loss function. We later assess the error of the forecast with horizon $H$ as

$$ERR_H(t) = L_{eval}\left(f_t\left(X^{(1)(t)}\right), X^{(t+H)}\right),$$

where $L_{eval}$ is the evaluation metric. The experiment results in two ($H = 7$ and $H = 28$) time series of errors per model $f$, showing the values of the forecast error, its fluctuations, and its dynamics during the pandemic. The choice of the loss function was driven by the fact that most models predicting the development of the number of confirmed infection cases assume exponential growth. In such a case, metrics like RMSE (root mean squared error) and MAE (mean absolute error), based on the absolute difference between the number of predicted and realized cases, tend to heavily penalize any exponential over-estimations. Therefore, the root mean squared logarithmic error (RMSLE) was chosen as the evaluation metric $L_{eval}$:

$$RMSLE(X, Y) = \sqrt{\frac{1}{pN}\sum_{i=1}^{p}\sum_{j=1}^{N}\left(\log\left(X_{i,j} + 1\right) - \log\left(Y_{i,j} + 1\right)\right)^2},$$

where $X$ and $Y$ are matrices of the same size $p \times N$. Also, consistent with [6, 7, 8, 9], the loss function is applied to the cumulative number of cases.

Next, we specify three models that we employ within our backtesting framework to study the RMSLE error over time: (1) for reference, we utilize a simple parameter-free baseline model that predicts future confirmed cases by using the latest known data point; (2) we introduce a power-growth model employing a constant growth rate that decays over time; and (3) we utilize a variation of the well-known epidemiological SEIR model called SEIRD.

Baseline model. As a reference point for the evaluation metric across the pandemic development, a simple parameter-free baseline model was applied. Denoting by $C_t$ the number of cumulative confirmed cases in a region at point in time $t$, the baseline predictions are

$$\hat{C}_{t+i} = C_t, \quad i = 1, \dots, H.$$

The baseline model is not intended to produce reasonable forecasts, but rather to indicate how difficult it is to make accurate predictions at each point in time $t$.

Power-growth model. This model is motivated by different types of COVID-19 forecasting models, such as statistical power law models with exponential growth [38], or autoregressive moving average models as described in Section 1. We have utilized this model successfully across all five Kaggle competitions [6, 7, 8, 9, 10], and the version presented in this paper is its final adaptation. The main idea of the model is to forecast COVID-19 cases by employing a constant growth rate, derived from previous observations, that decays over time, where the decay can accelerate. In detail, we can define the power-growth model as follows:

$$\hat{C}_{t+i} = \hat{C}_{t+i-1} \cdot \left(1 + gr \cdot gr_d^{\,i^{\,gr_{da}}}\right), \quad \hat{C}_{t} = C_t, \quad i = 1, \dots, H. \quad (5)$$

In Equation 5 we predict the cumulative number of cases $C$ at time $t + i$, $i = 1, \dots, H$, using the number of cases at time $t$; $gr$ refers to the growth rate, $gr_d$ to the decay of the growth rate, and $gr_{da}$ to the acceleration of the decay. The growth rate is calculated for each region separately by taking an exponentially weighted average of the observed daily growth rates over a certain number of past days ($n_{days}$). If a region does not exceed a minimum number of cases ($min\_cases$), a default growth rate ($gr_{def}$) is employed. The growth rate decay $gr_d$ as well as its acceleration $gr_{da}$ are constant across all regions. All parameters except the growth rate are thus hyperparameters that are optimized based on a global metric across regions. The power-growth model fitting was performed in the following way:

$$\theta_t = \operatorname*{arg\,min}_{\theta} RMSLE\left(f_{\theta}\left(X^{(1)(t-21)}\right), X^{(t-20)(t)}\right),$$

meaning that the most recent 21 days of data were used to optimize the hyperparameters $\theta$. The loss function is the same as the evaluation metric, $L_{fit} = L_{eval} = RMSLE$.
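To make the framework and the first two models concrete, the following is a minimal, self-contained Python sketch of the RMSLE metric, the baseline, the power-growth forecast, and the expanding-window backtest loop. It is our own simplification under the definitions above: the default hyperparameter values are arbitrary placeholders rather than the tuned ones, and the 21-day hyperparameter optimization is omitted.

```python
import numpy as np

def rmsle(pred: np.ndarray, true: np.ndarray) -> float:
    """Root mean squared logarithmic error."""
    return float(np.sqrt(np.mean((np.log1p(pred) - np.log1p(true)) ** 2)))

def baseline(history: np.ndarray, horizon: int) -> np.ndarray:
    """Repeat the last known cumulative count for every day of the horizon."""
    return np.repeat(history[:, -1:], horizon, axis=1)

def power_growth(history: np.ndarray, horizon: int, gr_d: float = 0.95,
                 gr_da: float = 1.0, n_days: int = 7,
                 min_cases: int = 50, gr_def: float = 0.1) -> np.ndarray:
    """Extrapolate cases with a per-region growth rate that decays over time."""
    # Exponentially weighted average of observed daily growth rates,
    # putting more weight on the most recent days.
    daily_gr = history[:, -n_days:] / np.clip(history[:, -n_days - 1:-1], 1, None) - 1
    weights = np.exp(np.arange(n_days))
    gr = (daily_gr * weights).sum(axis=1) / weights.sum()
    # Regions below the minimum case count fall back to a default rate.
    gr = np.where(history[:, -1] < min_cases, gr_def, gr)
    preds = np.empty((history.shape[0], horizon))
    cur = history[:, -1].astype(float)
    for i in range(1, horizon + 1):
        cur = cur * (1 + gr * gr_d ** (i ** gr_da))  # Equation 5
        preds[:, i - 1] = cur
    return preds

def backtest(X: np.ndarray, model, horizon: int, start: int = 30) -> dict:
    """Expanding window: fit on days 1..t, score the forecast for day t + H."""
    return {t: rmsle(model(X[:, :t], horizon)[:, -1], X[:, t + horizon - 1])
            for t in range(start, X.shape[1] - horizon + 1)}
```

Running backtest(X, power_growth, 7) and backtest(X, baseline, 7) on the cleaned p × N case matrix yields the kind of error-over-time series that are compared in Section 4.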
SEIRD model. The SEIR model belongs to a family of epidemiological models (see also Section 1) that map the spread of an epidemic through the sequential interaction of four groups or states, represented as ordinary differential equations: the susceptible (the individuals that can contract the disease), the exposed, the infected, and the removed. Our implementation uses a variation of the SEIR model called SEIRD [12], in which the removed category is further divided into recovered and deceased. The equations that map the rates of change of the main states are displayed below:

$$\frac{\partial S}{\partial t} = -\beta \frac{S I}{N},$$

where $\frac{\partial S}{\partial t}$ represents the change applied to the susceptible population $S$ at time $t$, $\beta$ is the infection rate (how many people an infected individual infects), $I$ the infected population at time $t$, and $N$ the total population.

$$\frac{\partial E}{\partial t} = \beta \frac{S I}{N} - \delta E,$$

where $\frac{\partial E}{\partial t}$ represents the change applied to the exposed population $E$ at time $t$, and $\delta$ is a parameter that controls the rate at which the exposed population transitions to the infected state; it can be interpreted as 1 divided by the incubation period (in other words, the period during which an individual is infected but asymptomatic and unable to spread the disease to others). Similarly, to compute the change applied to the infected group at time $t$, a parameter $\gamma$ is used to represent the recovery rate, or how quickly individuals move to the recovered state, and, equivalently, $\rho$ controls how quickly individuals move to the deceased state. The parameter $\alpha$ represents the fatality rate, the proportion of the infected population that will transition to the deceased state, while $(1 - \alpha)$ represents the proportion of the infected population that will transition to the recovered state:

$$\frac{\partial I}{\partial t} = \delta E - (1 - \alpha)\,\gamma I - \alpha \rho I.$$

Furthermore,

$$\frac{\partial R}{\partial t} = (1 - \alpha)\,\gamma I,$$

which expresses the change applied to the recovered group $R$ at time $t$. Finally,

$$\frac{\partial D}{\partial t} = \alpha \rho I$$

expresses the change applied to the deceased group $D$ at time $t$.

For each country, and given a set of bounds for the model's parameters ($N$, $\beta$, $\delta$, $\gamma$, $\alpha$, $\rho$), a stochastic, population-based optimization algorithm, differential evolution [46], is applied to find the optimal values for these parameters that minimize the RMSE across the infected, recovered, and deceased groups up to a selected point in time. The bounds-based optimization algorithm was preferred over others (such as gradient-based methods) because the bounds were selected based on the latest known information regarding the infection rate, incubation period, and fatality rate, providing a fairly narrow, constrained environment in which the algorithm can converge more quickly. The error for a single group is

$$rmse(y, \hat{y}) = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2},$$

where $y$ is the observed series for one of the infected, recovered, or deceased groups and $\hat{y}$ the corresponding predicted series. The overall metric to optimize can then be defined as

$$M = \frac{rmse(I, \hat{I}) + rmse(R, \hat{R}) + rmse(D, \hat{D})}{3},$$

where $M$ is the objective to minimize, the average of the respective root mean squared errors for infected, recovered, and deceased. Once the optimal parameters $N$, $\beta$, $\delta$, $\gamma$, $\alpha$, $\rho$ have been obtained, the curves for infected, recovered, and deceased are extrapolated in time to match the forecasting period. Since the predicted values are based on the fit to the known values, it is possible that the cumulative predicted numbers are lower than the last known value. In that case, the differences between consecutive predicted points are computed and added to the last known value to form the new cumulative predictions.
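The dynamics above translate directly into code. Below is a minimal Python sketch that integrates the SEIRD system with scipy; the parameter values are illustrative placeholders, not fitted ones, and in the full procedure scipy.optimize.differential_evolution would search the bounded parameter space to minimize the objective $M$ before extrapolating the curves.

```python
import numpy as np
from scipy.integrate import odeint

def seird(y, t, N, beta, delta, gamma, alpha, rho):
    """Right-hand sides of the SEIRD ordinary differential equations."""
    S, E, I, R, D = y
    dS = -beta * S * I / N                    # susceptibles becoming exposed
    dE = beta * S * I / N - delta * E         # exposed becoming infected
    dI = delta * E - (1 - alpha) * gamma * I - alpha * rho * I
    dR = (1 - alpha) * gamma * I              # share (1 - alpha) recovers
    dD = alpha * rho * I                      # share alpha dies at rate rho
    return dS, dE, dI, dR, dD

def simulate(params, days):
    """Integrate the system forward from a single initial infected case."""
    N, beta, delta, gamma, alpha, rho = params
    y0 = (N - 1, 0, 1, 0, 0)                  # S, E, I, R, D at day 0
    t = np.arange(days)
    return odeint(seird, y0, t, args=(N, beta, delta, gamma, alpha, rho)).T

# Illustrative parameters: population, infection rate, 1/incubation period,
# recovery rate, fatality rate, 1/time-to-death.
S, E, I, R, D = simulate((1_000_000, 0.5, 1 / 5, 1 / 10, 0.02, 1 / 9), days=120)
```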
Figure 3: Forecast error over the course of the pandemic. (a) 7-days forecast horizon; (b) 28-days forecast horizon.

Our experiments are based on data spanning the period from the 22nd January (d = 1) until the 22nd June 2020 (d = 153), that is, N = 153 days of data points from each of the p = 253 regions. In order to provide at least a month of data for training the models, backtesting results are reported from d = 31 onwards. Two prediction horizons were chosen for the experiments: H = 7 and H = 28. Many regions report new cases in a weekly cycle, with fewer cases reported during the weekend; therefore, horizons spanning full weeks are advisable to avoid instability. We make the backtesting framework as well as further code to run the experiments available online.

The first experiment aggregates $ERR_H(t)$ by the date $t$ in order to show how the forecasting error develops over the course of the pandemic. We show the respective results for both forecasting horizons in Figure 3. A first observation is that it is easier to capture short-term trends than long-term trends, as evident from the smaller absolute prediction errors across all models for the 7-days forecasts in Figure 3a compared to the 28-days forecasts in Figure 3b. Both the power-growth and the SEIRD model perform better than the simple baseline for most parts of the curve, which is why we focus on them next. We observe a steady decrease of the prediction error alongside that of the baseline model. At the beginning of the outbreak in early March, both models show high errors in predicting confirmed infected cases, near the levels of 1.3 and 5 for the 7-days and 28-days forecast horizons, respectively. Over time, the models' errors drop quickly and reach 0.4 and 1.5, respectively, for the two forecast horizons. In other words, the error is reduced by a factor of about three by the middle-to-end of March, that is, with roughly 15 extra days of observed data. From that point on, the errors decline more gradually through time and reach 0.1-0.2 and 0.5 for the two forecast horizons in June.

Given the decreasing error over time, we are now interested in studying the effect of the volume of historical data on prediction errors. To that end, our second experiment, visualized in Figure 4, contrasts how the error depends on the accumulated number of confirmed cases. The evaluation metric was aggregated by $C_t$, the number of confirmed cases at the date when the forecast was made. We focus on a forecasting horizon of 28 days, but observe similar trends for the 7-days forecast horizon. We can clearly see that the error decreases with more training data available. SEIRD even performs worse than the baseline when the number of recorded cases is very limited. As soon as a region reaches 1,000 confirmed cases, the forecast error falls securely below that of the baseline and decreases monotonically. The power-growth model shows constant improvement of the error with increasing $C_t$.

In this paper, we studied the predictive performance of COVID-19 forecasting models throughout the course of the pandemic. To that end, we examined the error (through RMSLE) of predicting COVID-19 confirmed cases across multiple countries around the globe, through time and volume, from the 22nd January until the 22nd June 2020.
The error was investigated by applying three models, a simple baseline model, a power growth model, and the well-known epidemiological SEIRD model, under a rigorous backtesting framework that required refitting the models' parameters every day on the historical data available up to that point and making predictions, covering the whole six-month period. We used 7-days and 28-days forecast horizons for our measurements. Our work highlights the importance of applying a rigorous backtesting framework to predicting the different stages of COVID-19. It is clearly demonstrated (and expected) that different points in time and different case volumes can result in different error levels. In the early days of the outbreak, when the volume of observed cases is still low, the error is larger (with higher volatility) than in later stages, when the curves have taken more developed shapes. Accurately depicting the error level can facilitate better usage of such epidemic models when they are integrated into decision systems, as it can help the decision maker judge how much confidence to place in such models at different stages throughout the epidemic. It is imperative to understand whether the error level of an epidemic model is low enough at any given point in time to provide a useful or exploitable prediction, as the cost of the models' errors may amount to more than financial losses.

References
Cord-19: Covid-19 open research dataset
Covid-19 data repository by the Center for Systems Science and Engineering (CSSE) at Johns Hopkins University
Covid-19 forecast hub
Covid-19 open research dataset challenge
Covid-19 projections using machine learning
Covid19 global forecasting
Covid19 global forecasting
Covid19 global forecasting
Covid19 global forecasting
Covid19 global forecasting
Infectious disease modelling: beyond the basic SIR model
Epidemic analysis of COVID-19 in Brazil by a generalized SEIR model
ARIMA modelling of predicting COVID-19 infections. medRxiv
Stock price prediction using the ARIMA model
The unprecedented stock market impact of COVID-19
Call for transparency of COVID-19 models
Forecasting the impact of the first wave of the COVID-19 pandemic on hospital demand and deaths for the USA and European Economic Area countries. medRxiv
How simulation modelling can help reduce the impact of COVID-19
Estimating the number of infections and the impact of non-pharmaceutical interventions on COVID-19 in 11 European countries
1st place solution: LGBM with some adjustments
The elements of statistical learning
Spectra of some self-exciting and mutually exciting point processes
Kaggle milestone: 5 million registered users!
A simple formulation of non-Markovian SEIR
Not all interventions are equal for the height of the second peak
Artificial-intelligence tools aim to tame the coronavirus literature
Forecasting for COVID-19 has failed
Caution warranted: using the Institute for Health Metrics and Evaluation model for predicting the course of the COVID-19 pandemic
Early dynamics of transmission and control of COVID-19: a mathematical modelling study. The Lancet Infectious Diseases
Small world effect in an epidemiological model
Leveraging data science to combat COVID-19: a comprehensive review
A Hawkes process to make aware people of the severity of COVID-19 outbreak: application to cases in France
A conceptual model for the outbreak of coronavirus disease 2019 (COVID-19) in Wuhan, China with individual reaction and governmental action
A modified SEIR model to predict the COVID-19 outbreak in Spain and Italy: simulating control scenarios and multi-scale epidemics
Some ML, a lot of judgement and luck
A brief history of generative models for power law and lognormal distributions
A model with exponentially decreasing intensity for COVID-19 epidemic outbreak
Analysis of the COVID-19 pandemic by SIR model and machine learning technics for forecasting
SEIR and regression model based COVID-19 outbreak predictions in India
Assessment of the outbreak risk, mapping and infestation behavior of COVID-19: application of the autoregressive and moving average (ARMA) and polynomial models. medRxiv
Coronavirus pandemic (COVID-19). Our World in Data
Epidemic SEIQRDP model using genetic fitting algorithm with cross-validation and application to early dynamics of COVID-19 in Algeria
Prediction of the COVID-19 pandemic for the top 15 affected countries: advanced autoregressive integrated moving average (ARIMA) model. JMIR Public Health and Surveillance
Differential evolution: a simple and efficient heuristic for global optimization over continuous spaces
Machine learning model estimating number of COVID-19 infection cases over coming 24 days in every province of South Korea (XGBoost and MultiOutputRegressor). medRxiv
Estimation of the transmission risk of the 2019-nCoV and its implication for public health interventions
COVID-19 epidemic outcome predictions based on logistic fitting and estimation of its reliability
COVID-19 in Iran: round 2. The Lancet Infectious Diseases
A review of backtesting methods for evaluating value-at-risk
The global impact of COVID-19 and strategies for mitigation and suppression
Prediction models for diagnosis and prognosis of COVID-19 infection: systematic review and critical appraisal
Early estimation of the case fatality rate of COVID-19 in mainland China: a data-driven analysis
How well can we forecast the COVID-19 pandemic with curve fitting and recurrent neural networks? medRxiv
Fractal kinetics of COVID-19 pandemic. medRxiv

Acknowledgements. We want to thank Dr. Christof Henkel (kaggle.com/christofhenkel) for the collaboration on Kaggle in developing our proposed power-growth model.