key: cord-0475346-ux19us6g
authors: Jersakova, Radka; Lomax, James; Hetherington, James; Lehmann, Brieuc; Nicholson, George; Briers, Mark; Holmes, Chris
title: Bayesian imputation of COVID-19 positive test counts for nowcasting under reporting lag
date: 2021-03-23
sha: 899837ce819cf2cc0f7c00983b1bffbcc96cacf2
doc_id: 475346
cord_uid: ux19us6g

Obtaining up-to-date information on the number of UK COVID-19 regional infections is hampered by the reporting lag in positive test results for people with COVID-19 symptoms. In the UK, for "Pillar 2" swab tests for those showing symptoms, it can take up to five days for results to be collated. We make use of the stability of the under-reporting process over time to motivate a statistical temporal model that infers the final total count given the partial count information as it arrives. We adopt a Bayesian approach that provides for subjective priors on parameters and a hierarchical structure for an underlying latent intensity process for the infection counts. This results in a smoothed time-series representation now-casting the expected number of daily counts of positive tests, with uncertainty bands that can be used to aid decision making. Inference is performed using sequential Monte Carlo.

In light of COVID-19, the UK government tracks the number of lab-confirmed positive tests over time, primarily as a measure of progress against epidemic control². Since it takes time for test results to be reported to their local authority, and subsequently centralised, there is uncertainty on the most recent positive test counts. This uncertainty diminishes over time until all tests for a particular day are eventually reported, whereafter the count remains unchanged. The time taken until the reported counts converge to a final value, here referred to as reporting lag, is around four days. News reports and publicly available summaries of the positive tests³ ignore the days for which the counts have not yet converged to a final value, and often report a moving average of the positive tests. We propose here a model on the positive test count with reporting lag which enables 'now-casting' of the true count with uncertainty; in other words, by correcting for the underestimate in live reported data, we are able to suitably estimate and impute the actual positive test count and extend the seven-day moving average to the present moment. We also demonstrate how to incorporate the model into a statistical alerting system which is triggered when there is high confidence that the reported positive test counts are above a threshold value. Given the pace of the COVID epidemic, there are a number of concurrent works [1, 2, 3, 4] with similar features to our approach. These use either binomial or negative binomial models for test counts combined with spatio-temporal models (an approach widely used in epidemiology for modelling disease risk and spread). In contrast to our model, however, they do not consider reporting lag, and only analyse data once all the results are in. We demonstrate that the reporting lag is in fact predictable, and include it in our model to return a now-cast that incorporates the most recently available reported data.
We model the reporting lag using binomial thinning; whilst there already exist well-founded mechanisms for building auto-regressive binomial thinning models on count data [5], we choose instead to learn the thinning rates directly from empirical data to avoid restricting the dynamics of the lag to a particular form or order of auto-regressive prior. With this approach we gain the additional benefit of finite-time saturation to a fixed value, which is a property observed in all sequences of lagged reports. We combine empirical priors on the under-reporting and a generative model in a time-series prior, providing a posterior distribution on the underlying intensity of the disease. In Figure 1 we show the posterior distribution on the time series of counts for Leeds in December alongside the true and reported counts, as an example of now-casting with uncertainty quantification.

Our approach is similar to two recent works that model a lagged reporting process for count data, applied to COVID deaths [6] and cases of dengue fever [7]; both introduce temporal structure and encode week-day and calendar effects into the reporting lag. One of the key differences between our models is the prior on the latent disease intensity: instead of cubic splines, we impose a random walk prior with drift.

The collection of daily lab-confirmed COVID-19 positive test counts is available as open data⁴. The data are stratified, so that the count for a nation is equal to the sum of counts for its regions. These regions are themselves divided into Upper-Tier Local Authorities (UTLAs), and each of these UTLAs is covered by a set of Lower-Tier Local Authorities (LTLAs), with the highest-resolution count data available at the LTLA level. In England, there are 9 regions, 150 UTLAs, and 316 LTLAs. On each day, every LTLA reports a sequence of positive test counts for all test dates up to but not including the current day, allowing for updates to previously reported values (Figure 2). The most recently reported counts consistently underestimate the true count due to the lag in reporting. As time progresses and more tests are processed, the reported value for a given test date approaches the true count with increasing certainty (Figure 2). As a result, for each test date we observe a sequence of monotone increasing reports which eventually converges to the correct count for that particular date⁵.

Figure 2: The report on 14th December updates counts reported on the 13th for test dates from 10th December onwards, before which they each agree with the true count. Both underestimate the true count for the most recent dates, and will therefore continue to be updated in subsequent reports until the true count is reached.

² Tests here refer to PCR swab tests which detect presence of the virus. We analyse the number of tests carried out through the National Health Service (NHS) and Public Health England (PHE) as well as commercial partners (pillars 1 and 2).
³ Summary statistics and visualisations of the latest available data are presently available as a dashboard at https://coronavirus.data.gov.uk.
⁴ See footnote 3.
⁵ In rare cases an error can lead to an over-estimate report followed by a lower count update.
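To make the reporting structure concrete, the following minimal sketch (in Python, with entirely hypothetical numbers rather than real PHE data) represents the report sequences y_t^(j) for a handful of test dates and recovers the empirical reporting rates θ_t^(j) = y_t^(j) / x_t once a row has converged; the array names and values are illustrative only.

```python
import numpy as np

# A minimal illustration (synthetic numbers, not real PHE data) of the lagged
# reporting structure described above: rows are test dates, columns are reporting
# lags j = 1, 2, ..., and each row is a monotone increasing sequence of cumulative
# reports that saturates at the true count once the maximum lag is reached.
true_counts = np.array([120, 95, 140])          # hypothetical x_t for three test dates
reports = np.array([
    [60, 95, 110, 118, 120],                    # y_t^(1), ..., y_t^(5) for date 0
    [50, 70, 88, 93, 95],                       # date 1
    [70, 110, 131, 138, 140],                   # date 2
])

# Empirical reporting rates theta_t^(j) = y_t^(j) / x_t, computable once a row has converged.
theta = reports / true_counts[:, None]

# Sanity checks mirroring the description: reports are non-decreasing in lag and
# eventually equal the true count.
assert np.all(np.diff(reports, axis=1) >= 0)
assert np.all(reports[:, -1] == true_counts)
print(np.round(theta, 2))
```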
Let i ∈ {1, ..., 316} index the collection of LTLAs. Let t ∈ {0, ..., T} index the number of days for which data is available, so that x_it is an unobserved random variable corresponding to the true count for LTLA i on day t. Let j ∈ {1, ..., T − t} index the reporting lag, so that y_it^(j) denotes the report for day t made on day t + j. Each true count x_it is associated with a sequence of observed, reported counts y_it = (y_it^(1), ..., y_it^(T−t)). For some finite but unknown maximum reporting lag τ_it, we observe y_it^(j) = x_it for all j ≥ τ_it.

Our aim is to specify a model on the true counts x_it, given reported counts y_it^(j) and without knowledge of τ_it, in order to infer a distribution on the x_it which concentrates on the true value as T increases. We further define the reporting rate at lag j, θ_it^(j) = y_it^(j) / x_it, to be the proportion of the true count that is reported at lag j. For historical data such that y_it^(j) has converged to x_it, we can study θ_it^(j) in order to characterise the behaviour of the reporting lag. In the following sections, we omit subscripts on θ_it^(j) to indicate the random variable obtained by marginalising with respect to that subscript. For example, θ_i^(j) is the random variable taking value θ_it^(j) with probability 1/T.

We develop our now-casting model on the basis of the empirical behaviour of the reporting rates θ_it^(j), which we now describe. Since July 2020, we have observed two distinct modes of predictable reporting behaviour, which we refer to as spatially and temporally stable.

Figure 3: Empirical distribution of θ_it^(j) across all LTLAs with at least one reported count x_it > 100, combined over all days between the 14th and the 30th of October.

When reporting is temporally stable, we observe that the reporting rates for an LTLA at each lag do not change much with time and may therefore be estimated by averaging the historical rates θ_it^(j) over the day index t (equation (1)). Figure 5 shows empirical distributions for θ_it^(j) marginally across the day index t for lags j = 1, ..., 5. We observe that E[θ^(j)] is increasing in j. The reporting rates are predictable in a fashion supported by our intuition; as lag increases we are increasingly confident that θ_it^(j) = 1 across all LTLAs, and this state is reached by a coherent sequence of updates to an initial underestimate. It is also clear from Figure 5 that there is enough variation in θ_i^(j) between LTLAs to warrant their separate consideration for modelling. When reporting behaviour changes slowly with time, we may construct a temporally local set S of recent days and average the rates over that set instead (equation (2)).

Figure 5: Empirical distributions of θ_it^(j) in November for a selection of LTLAs. We also include England to show that the marginal θ^(j) obeys the intuition encoded by observations (i) and (ii) in Section 4; the mean of θ_i^(j) increases with increasing lag, indicating that more reports are accounted for as time progresses.

Let N_n(i) be the n-hop neighbourhood of LTLA i on the adjacency graph of LTLAs. When reporting is spatially stable we observe that the reporting rates of LTLAs which are close to one another are similar, and so we may estimate a reporting rate for an LTLA by averaging those of its neighbours in N_n(i) (equation (3)). In the left panel of Figure 4 we show the performance of a 2-hop neighbourhood estimator for Manchester in October; the reporting rates of neighbouring LTLAs track one another. It is clear though that the reporting is not temporally stable, and so we must rely on spatial estimates alone. In the right panel we measure the performance of the 2-hop estimates (3) against the truth for all LTLAs where we have observed at least one report x_it > 100, marginally across the dates between the 14th and the 30th of October. There is a clear linear relationship (R = 0.9) with the truth. In each of Sections 4.1 and 4.2 we demonstrate that noisy estimates of the reporting rates θ_it^(j) can be constructed.
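The sketch below illustrates the two estimators just described, under the simplifying assumption that both reduce to plain averages of historical converged rates; the exact forms of equations (1)-(3) are not reproduced above, so the unweighted means, function names and synthetic inputs here are assumptions for illustration.

```python
import numpy as np

def temporal_theta_hat(theta_hist):
    """Estimate per-lag reporting rates for one LTLA by averaging historical,
    converged rates over a recent window of test dates (rows = dates, cols = lags).
    A simple mean is assumed here; equations (1)-(2) may weight dates differently."""
    return theta_hist.mean(axis=0)

def spatial_theta_hat(theta_by_ltla, neighbours):
    """Estimate per-lag reporting rates for an LTLA from its n-hop neighbourhood,
    in the spirit of equation (3); `neighbours` is a list of LTLA indices."""
    return np.mean([theta_by_ltla[i].mean(axis=0) for i in neighbours], axis=0)

# Hypothetical data: 4 LTLAs, 10 historical dates, lags 1..5.
rng = np.random.default_rng(0)
theta_by_ltla = {
    i: np.clip(rng.normal([0.5, 0.8, 0.9, 0.97, 1.0], 0.05, size=(10, 5)), 0, 1)
    for i in range(4)
}

print(temporal_theta_hat(theta_by_ltla[0]))
print(spatial_theta_hat(theta_by_ltla, neighbours=[1, 2, 3]))
```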
In order to avoid under-dispersion in models on x_it we must capture uncertainty in these estimates. We therefore propose that θ_it^(j) ∼ Beta(α^(j), β^(j)), with the parameters chosen by moment matching to the empirical estimates. Figure 6 illustrates the empirical distribution and resulting fit for two LTLAs in November; as time progresses, the moment matching produces Beta distributions with most of their mass on large values of θ_it^(j), as expected.

Based on the empirical observations made in Section 4, we now describe a model on reports y_it^(j) which accounts for a predictable reporting lag in order to impute up-to-date predictive distributions on the current true count x_it. We model the reported counts y_it as random variables following a binomial distribution conditional on the (unobserved) true count x_it and the reporting rate θ_it^(j) (equation (4)). Modern research on epidemic monitoring typically poses negative binomial models on count data, with spatio-temporal regularisation for smoothness; here we capture the same effect by proposing that the x_it result from a Poisson process and integrating over the rate parameter under a Gaussian random-walk time-series prior. We introduce the model in a modular fashion.

To describe the relationship between the true counts x_it and the reported counts y_it, we treat each y_it^(j) as a binomial random variable with success probability θ_it^(j) over x_it independent Bernoulli trials. Following the arguments of Section 4, we place Beta priors on the θ_it^(j). The most recent report y_it^(T−t) is a sufficient statistic for x_it (Appendix C), and so we build the following hierarchical model:

x_it ∼ p(x_it),   θ_it^(T−t) ∼ Beta(α^(T−t), β^(T−t)),   y_it^(T−t) | x_it, θ_it^(T−t) ∼ Binomial(x_it, θ_it^(T−t)).

Integrating out each θ yields the following joint distribution on (x_it, y_it^(T−t)):

p(x_it, y_it^(T−t)) = BetaBinomial(y_it^(T−t); x_it, α^(T−t), β^(T−t)) p(x_it)    (8)

where p(x_it) is a prior on x_it which may, for example, be flat across a set of feasible positive test counts given the population of the LTLA under consideration. This joint distribution may be used to draw samples from the marginal posterior p(x_it | y_it) with Metropolis-Hastings (MH).

Whilst estimating the true count directly is clearly important in monitoring the epidemic, a more compelling epidemiological variable is the latent rate which controls emission of these counts. To extend the model, we therefore assume that each x_it is the result of a Poisson process with rate λ_it, so that the hierarchy above is extended with x_it | λ_it ∼ Poisson(λ_it). When λ_it is integrated out under a gamma prior distribution this is equivalent to proposing a negative-binomial distribution on x_it, which is a common choice in modelling count data for epidemic monitoring. The joint distribution on (λ_it, y_it^(T−t)) (equation (13)) may be used as an MH potential function in order to generate samples from the new posterior of interest p(λ_it | y_it^(T−t)).

Estimates of the latent rates λ_it may suffer from high variance, particularly in the early stages of reporting when the reported y_it^(j) likely constitute underestimates of x_it. To reduce this variance, we encode time dependence in the latent rates, as follows. Let κ_it be the difference between successive Poisson rates, so that λ_i,t+1 = λ_it + κ_it. Further impose an AR(1) prior with scale σ_i on the sequence κ_i,0:T, so that given a standard normal random variable ε_t ∼ N(0, 1) we may write κ_i,t+1 = κ_it + σ_i ε_t. This represents an intuitive assumption of local temporal smoothness in the epidemic rate and is a common choice of temporal regularisation; the Kalman filter [8], to which our model bears close resemblance, operates under a similar latent transition distribution. The prior induces dependence between each λ_it, κ_it and the observed data y_i,0:T = (y_i,0, ..., y_i,T). The key distributions of interest are then the joint filtering and smoothing distributions, given by p(λ_it, κ_it | y_i,0:t) and p(λ_it, κ_it | y_i,0:T) respectively.
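As a sanity check on the hierarchy described in this section, the following sketch simulates synthetic data forwards through it: a random-walk drift κ_t, intensities λ_t, Poisson true counts x_t, and lag-1 reports obtained by binomial thinning with Beta-distributed reporting rates whose parameters are moment-matched to empirical samples. The horizon, scale and rate values are hypothetical, and the exact prior forms used in the paper are paraphrased rather than copied.

```python
import numpy as np

rng = np.random.default_rng(1)

def moment_match_beta(samples):
    """Beta(alpha, beta) parameters matching the mean/variance of empirical
    reporting-rate samples, as suggested by the moment matching of Section 4.3."""
    m, v = samples.mean(), samples.var()
    common = m * (1 - m) / v - 1
    return m * common, (1 - m) * common

T, sigma = 60, 0.5
kappa = np.cumsum(sigma * rng.standard_normal(T))      # random-walk drift kappa_t
lam = np.maximum(50 + np.cumsum(kappa), 1.0)           # latent Poisson rates lambda_t, kept positive
x = rng.poisson(lam)                                   # true counts x_t

# Lag-1 reports thinned binomially with Beta-distributed reporting rates.
alpha, beta = moment_match_beta(np.clip(rng.normal(0.6, 0.1, 500), 0.01, 0.99))
theta1 = rng.beta(alpha, beta, size=T)
y1 = rng.binomial(x, theta1)                           # observed lag-1 reports y_t^(1)
print(x[:5], y1[:5])
```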
The smoothness assumption described in Section 5.3 constrains the sequence λ_i,0:T to vary in proportion to the random walk scale σ_i. The weekend effects demonstrated in Figure 7 break this smoothness assumption; when there are predictable drops in the test count at weekends it is because fewer tests are taken, rather than any true decrease in the underlying incidence rate of the disease. To capture this effect, we introduce latent random variables z_i,t ∈ [0, 1] with prior distribution z_it ∼ Beta(a, b), and then let the emission distribution on x_it be x_it | λ_it, z_it ∼ Poisson(λ_it z_it), so that smoothness in λ_i,0:T is maintained by allowing z_it to capture these observed weekend effects by emitting counts at a reduced rate. In practice a flat Beta(1, 1) prior allows the smoothness assumption to be maintained, though selecting a stronger a and b may be possible if weekend effects are predictable in their magnitude. We can measure the strength of these effects by examining the posterior smoothing distributions p(z_it | y_0:T) on weekend days. In Appendix A.2 we give details on how to evaluate this posterior in the case where each weekend day has its own unique latent effect but the effects share prior parameters, as well as demonstrating how this addition alters the procedure for determining p(x_it | y_i,0:T). From now on we omit the LTLA index i for brevity.

Together, Sections 5.1-5.4 specify a smooth time-dependent model for count data exhibiting weekend effects and a lagged reporting process. The Directed Acyclic Graph (DAG) in Figure 8 shows the full conditional dependency structure for this model, encoded equivalently by the joint distribution over all latent variables and observations (equation (18)). We can derive conditional independence relations between variables at different time points as follows. For day t, denote by θ_t = {θ_t^(j)} the collection of under-reporting rates for each lagged report and by Ω_t = {λ_t, κ_t, x_t, z_t, θ_t, y_t} the collection of latent variables and observations; then, applying the d-separation criteria [9] to the DAG in Figure 8, we have the conditional independence relation (19).

Figure 8: Each x_t arises from a Poisson process with rate λ_t z_t, where z_t = 1 on weekdays. The report counts follow a beta-binomial distribution with parameters x_t, y_t, α, β, which results from integrating out each θ_t^(j) under a Beta(α^(j), β^(j)) prior.

We will use these conditional independence relations as the basis for drawing samples sequentially from the posterior.

6 Posterior inference

For each of the submodels discussed in Section 5 we draw posterior samples with standard Markov chain Monte Carlo (MCMC) methods. For the time-independent submodels, equations (8) and (13) serve as potential functions for simple MH sampling. In equation (8) the sum over x_t can be performed directly for benign choices of prior on x_t, when the true x_t is expected to be small, or by numerical integration when the prior is concentrated on a region of its support. To sample from p(λ_t | y_t^(T−t)) we use standard normal proposals and initialise the sampler to the mean of the prior, E_p(λ_t)[λ_t]. Since the expected distance travelled by an n-step one-dimensional Gaussian random walk scales as √n, we can be confident that the posterior is well explored by choosing the number of MH steps n such that √n exceeds our worst-case estimate of the absolute error |x_t − E_p(λ_t)[λ_t]|.
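A minimal Metropolis-Hastings sketch of the time-independent sampler just described: the potential sums the beta-binomial report likelihood over feasible true counts and multiplies by a Poisson emission given λ_t, assuming for simplicity a flat prior on λ_t; the step size, chain length, truncation and Beta parameters are illustrative choices, not the paper's settings.

```python
import numpy as np
from scipy.stats import betabinom, poisson

def log_potential(lam, y, alpha, beta, x_max=2000):
    """Unnormalised log p(lambda, y) for a single day: the beta-binomial report
    likelihood summed over feasible true counts x >= y, with a Poisson emission
    on x given lambda (a flat prior on lambda is assumed for simplicity)."""
    if lam <= 0:
        return -np.inf
    x = np.arange(y, x_max)
    like = betabinom.pmf(y, x, alpha, beta) * poisson.pmf(x, lam)
    return np.log(like.sum() + 1e-300)

def mh_lambda(y, alpha, beta, n_steps=2000, step=5.0, seed=0):
    """Random-walk Metropolis-Hastings targeting p(lambda | y), initialised near a
    crude prior-mean correction of the under-reported count."""
    rng = np.random.default_rng(seed)
    lam = y / max(alpha / (alpha + beta), 1e-3)         # start at y / E[theta]
    logp = log_potential(lam, y, alpha, beta)
    samples = []
    for _ in range(n_steps):
        prop = lam + step * rng.standard_normal()
        logp_prop = log_potential(prop, y, alpha, beta)
        if np.log(rng.uniform()) < logp_prop - logp:
            lam, logp = prop, logp_prop
        samples.append(lam)
    return np.array(samples[n_steps // 2:])             # drop burn-in

print(mh_lambda(y=60, alpha=6.0, beta=4.0).mean())
```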
In all cases we apply thinning and burn-in to the sample chain, though no hard convergence checks are made. In the time-dependent case, we make use of the conditional independence relation (19) to construct an algorithm for filtering and smoothing which builds on the same simple MH sampling schemes as those used for inference in the time-independent models. Inference mechanisms under the conditional dependencies induced by the random-walk prior in Section 5.3 are well studied [10], and so we give here the equations necessary for sequential posterior sampling of the latent λ_t, κ_t and x_t.

At its foundation, the complete model of Section 5.5 is Markov on a continuous latent state space with a non-conjugate emission distribution. We therefore employ a forward-backward algorithm, and deal with the non-conjugate structure by performing MH sampling at each time step. The atomic distributions constructed from these samples are propagated forwards as a prior to the subsequent time step, and re-sampled backwards for smoothing; this simple particle Sequential Monte Carlo (SMC) algorithm differs from the canonical bootstrap filter since each filtering distribution is sampled directly rather than re-sampled from the previous predictive distribution. In what follows we dispense with the LTLA index for brevity and outline the strategy for sequential sampling without weekend effects. The interested reader may refer to Appendices A.1 and A.2 for full details. At each step we target a potential of the form

f(λ_t, κ_t) ∝ p(y_t | λ_t) Σ_{i=1}^N p(λ_t, κ_t | Λ_t−1^(i), K_t−1^(i))    (20)

where the sum over the atoms Λ_t−1 results from approximating p(λ_t−1, κ_t−1 | y_0:t−1) by the atomic distribution on MH samples. Sampling from p(λ_0 | y_0) is done exactly as in the time-independent case using a potential given by (13); by induction we can therefore compute a sequence of joint distributions of the form (20) such that we may draw samples from the sequence of joint filtering distributions p(λ_0, κ_0 | y_0), ..., p(λ_t, κ_t | y_0:t).

We can construct the smoothing distributions by re-sampling these filtering atoms in a backward pass. Write the j-th smoothing atoms as L_t+1^(j), ζ_t+1^(j) ∼ p(λ_t+1, κ_t+1 | y_0:T). Then the re-sampling probabilities are given by (47), where the weights w_ij are the transition densities from the filtering atoms to the smoothing atoms, Gaussian in the drift component with variance σ², and the normaliser is w*_j = Σ_{i=1}^N w_ij. The full procedure for inference is given by Algorithm 1.

Algorithm 1:
    Sample: N atoms {Λ_0^(i)}_{i=1}^N from p(λ_0 | y_0)
    for t ← 1 to T do
        f(λ_t) = p(y_t | λ_t) Σ_{i=1}^N p(λ_t | Λ_t−1^(i))
        Sample: N atoms {Λ_t^(i)}_{i=1}^N by MH on f(λ_t)
    Set: ζ_T^(0), ..., ζ_T^(N) ← Λ_T^(0), ..., Λ_T^(N)
    for t ← T − 1 to 0 do
        Sample: N atoms {ζ_t^(i)}_{i=1}^N by re-sampling {Λ_t^(i)}_{i=1}^N with probabilities (47)
    return p(λ_t | y_0:T) ≈ (1/N) Σ_{i=1}^N δ(λ_t − ζ_t^(i)) for all t ∈ {0, ..., T}

In Figure 9 we show the result of learning this smoothing distribution with and without weekend effects included. Given the smoothing atoms for each λ_t we can compute an approximate smoothing distribution for each x_t by averaging p(x_t | λ_t, y_t) over the atoms, which for time T gives us our required now-cast of the true count in the face of reporting lag. See Figure 14 for an example now-cast showing uncertainty on recent counts that diminishes with time.
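The following is a compact forward-filtering, backward-resampling sketch in the spirit of Algorithm 1. To keep it short it simplifies in two ways that should be flagged: the state is collapsed to λ_t alone with a Gaussian random-walk transition (the drift κ_t is omitted), and the emission is a plain Poisson on counts assumed to have converged; the beta-binomial emission and the per-step MH sampling of the paper slot into the same structure.

```python
import numpy as np
from scipy.stats import norm, poisson

def particle_filter_smoother(y, sigma=3.0, n_particles=500, seed=0):
    """Forward filter and backward re-sampler for a single intensity sequence.
    Simplifications relative to Algorithm 1: lambda-only state with a Gaussian
    random-walk transition, Poisson emission on converged counts, and bootstrap
    re-sampling in place of per-step Metropolis-Hastings."""
    rng = np.random.default_rng(seed)
    T, N = len(y), n_particles
    particles = np.empty((T, N))

    # Forward pass: propagate, weight by the emission, resample.
    lam = np.maximum(rng.normal(y[0], np.sqrt(y[0] + 1.0), N), 0.1)
    particles[0] = lam
    for t in range(1, T):
        lam = np.maximum(lam + sigma * rng.standard_normal(N), 0.1)
        w = poisson.pmf(y[t], lam)
        w /= w.sum()
        lam = rng.choice(lam, size=N, p=w)
        particles[t] = lam

    # Backward pass: re-sample filtering atoms with transition-density weights.
    smoothed = np.empty_like(particles)
    smoothed[-1] = particles[-1]
    for t in range(T - 2, -1, -1):
        new = np.empty(N)
        for j in range(N):
            w = norm.pdf(smoothed[t + 1, j], loc=particles[t], scale=sigma)
            w /= w.sum()
            new[j] = rng.choice(particles[t], p=w)
        smoothed[t] = new
    return particles, smoothed

# Hypothetical converged daily counts for a single LTLA.
y = np.array([52, 60, 58, 75, 81, 90, 88, 104, 110, 120])
filt, smooth = particle_filter_smoother(y)
print(smooth.mean(axis=1).round(1))
```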
In public reporting of the positive test count, the UK government uses a windowed average computed over a seven-day period. This simple mechanism is to an extent well motivated as an approximation to linear dynamical models. For simplicity, consider modelling only the latest reported counts, for times where y_t^(T−t) = x_t, by a Kalman filter with no drift, so that the converged counts are treated as noisy observations of a latent intensity following a Gaussian random walk. Let the Kalman gain (see Appendix D) for time-step t be K_t ∈ [0, 1]. The expectation of the Kalman filtering distribution at time-step t may be written as

E[λ_t | y_0:t] = K_t y_t + (1 − K_t) E[λ_t−1 | y_0:t−1]    (26)

which expands recursively to give (27), a weighted average of the observed data. It is clear that w_t > w_≤t−2, and further that if K_t > 1/2 then w_t > w_≤t−1, so that the most recent observation has the largest weight. This gives a rough interpretation of the filtering distribution as a weighted average of the data with most weight given to the most recent observations. In the absence of implementing a full model, we therefore suggest that a windowed average with decaying weights constitutes a type of posterior point estimate for the Kalman filter, which is in itself a reasonable approximation to our model for times where reported counts have converged to the truth; Figure 10 shows a comparison between our model and a number of weighted moving averages which track the expected value of the smoothing posterior on λ_t.

Our model includes one free parameter: the scale σ of temporal smoothing applied by the random walk prior on the sequence κ_0:T (see equation (14)). The choice of σ influences the set of feasible posterior distributions on λ_t; in the absence of prior information on the rate of change of the disease it is important to choose σ to best explain the data observed. We take an empirical Bayes approach, and maximise the model evidence [11] p(y_1:T | σ) over a feasible set of σ. From the perspective of a windowed average, this roughly corresponds to choosing the window length and weights that best fit the observed data. Figure 11 demonstrates the influence of σ on the smoothing posterior. When the scale is too small relative to the variation in the counts, the posterior intensity cannot change quickly enough to capture well the most recent reported counts and risks missing changes in trend; when the scale is increased, the posterior intensity can decrease quickly enough to explain the observations.
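A small sketch contrasting the decaying-weight average implied by a fixed-gain version of the recursion in (26) with the flat trailing 7-day mean used in public reporting; the gain value and counts are hypothetical.

```python
import numpy as np

def decaying_window_average(y, gain=0.4):
    """Exponentially decaying weighted average, mirroring the recursive form
    E[lambda_t] = K * y_t + (1 - K) * E[lambda_{t-1}] of a driftless Kalman filter
    with a fixed gain K; the most recent observation gets the largest weight."""
    est = float(y[0])
    out = []
    for obs in y:
        est = gain * obs + (1 - gain) * est
        out.append(est)
    return np.array(out)

def flat_window_average(y, window=7):
    """The publicly reported baseline: an unweighted trailing 7-day mean."""
    y = np.asarray(y, dtype=float)
    return np.array([y[max(0, t - window + 1): t + 1].mean() for t in range(len(y))])

y = [52, 60, 58, 75, 81, 90, 88, 104, 110, 120]   # hypothetical converged counts
print(decaying_window_average(y).round(1))
print(flat_window_average(y).round(1))
```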
One of the UK government's main strategies for pandemic control has been to impose restrictions at the LTLA level in response to increasing case numbers. For our model to be useful in informing these decisions we must therefore have confidence in its predictions. Moreover, we would like to make assertions of probability on whether or not case numbers are following an increasing trend, or have exceeded a threshold of concern. We now describe how our model can be used to 1) monitor systematic reporting errors, and 2) act as an alert system at the LTLA level.

During the course of positive-test reporting thus far, there has been one major reporting error. A technical oversight resulted in the omission of 15,841 positive tests recorded between the 25th of September and the 2nd of October, prior to a corrective backdate commenced on the 3rd of October. This fundamental change in reporting should be detectable under a well-calibrated model. The posterior predictive distribution p(x_t+1 | y_0:t) and the lag-j smoothing distribution p(x_t+1 | y_0:t+j) each assign mass to a set of positive test counts at time t + 1 which are credible according to the model, but accounting for observations up to times t and t + j respectively. When the reporting process is well matched with the assumptions of our model, we expect these distributions to be more consistent than when there are large systematic reporting errors. As a mechanism for detecting these errors, we propose a consistency statistic comparing these two distributions. When j is large, the smoothing distribution reduces to a delta function on the truth, and the statistic amounts to measuring how well the predictive distribution captures the true value. Extending this reasoning to the sequence of lagged reports for time t + 1, we recover the conditional evidence p(y_t+1 | y_0:t) by integrating out all unobserved random variables. Since y_t^(j) under-reports x_t when j is small, we may choose to evaluate this conditional evidence as an aggregated measure of how well the model has captured reports which have not yet converged to a fixed value, which may provide early warning signs of systematic reporting errors.

A fundamental assumption of the model (18) is that the positive test counts x_t at time t are a priori Poisson distributed with rate λ_t on weekdays, and so Var[x_t | λ_t] = λ_t. Although windowed average techniques (see Section 6.3) may prove to be good estimators of the expected posterior on λ_t for times t such that we have observed y_t^(j) = x_t, these methods cannot include under-reports, or make statements of probability. For an alerting system that is both stable and probabilistic, we can examine the posterior behaviour of the λ_0:T and κ_0:T. The difference |κ_t − κ_t+1| may only substantially exceed σ when there is sufficient posterior evidence to do so after observing y_0:T. This makes the marginal smoothing posteriors p(λ_t | y_0:T) and p(κ_t | y_0:T) ideal for establishing whether the intensity of the epidemic has crossed a threshold, and whether or not the sequence of intensities is increasing. For some threshold value V we can use the fraction of smoothing particles which are larger than V,

P(λ_t > V | y_0:T) ≈ (1/N) Σ_{i=1}^N 1[ζ_t^(i) > V],

to estimate an alerting probability. In Figure 12 we give an example of an alert for Thurrock between the 26th of November and the 13th of December, and in Figure 13 we show how the smoothing posterior on κ_0:T is easy to interpret visually as a description of whether the intensity is increasing.

Figure 12: Example alert mechanism for Thurrock detecting if λ is above a threshold value. Top: the smoothing posterior on λ as well as the reported counts and a threshold V. Bottom: probability that λ is above the threshold value at each time step.

Figure 13: The smoothing posterior on the drift κ_t for Thurrock, showing λ is following an increasing trend from 6th December onwards.

Under the model described in this report, the now-cast on x_T inferred from all available data is given by the smoothing distribution p(x_T | y_0:T). In Figure 14 we display this now-cast for two LTLAs, as well as the smoothing distribution for times t < T; the uncertainty on x_t reduces with lag, so for times far enough in the past the now-cast reflects our confidence that the reported count is the truth. We show results over a two-week period in December when the UK was transitioning out of lockdown, where the model we propose may be the most useful in helping to monitor changes in the total number of positive tests.
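The alerting probabilities described here reduce to counting smoothing particles, as in the sketch below; the particle values are synthetic stand-ins for draws from p(λ_T | y_0:T) and p(κ_T | y_0:T).

```python
import numpy as np

def alert_probability(smoothing_particles, threshold):
    """Probability that the latent intensity exceeds a threshold V, estimated as
    the fraction of smoothing particles above V."""
    return (np.asarray(smoothing_particles) > threshold).mean(axis=-1)

def trend_probability(kappa_particles):
    """Probability that the intensity is increasing, i.e. that the drift kappa_t
    is positive under the smoothing posterior."""
    return (np.asarray(kappa_particles) > 0).mean(axis=-1)

# Hypothetical smoothing particles for a single day: 1000 draws of lambda_T and kappa_T.
rng = np.random.default_rng(2)
lam_particles = rng.normal(118.0, 9.0, size=1000)
kap_particles = rng.normal(3.5, 2.0, size=1000)
print(alert_probability(lam_particles, threshold=100.0))   # P(lambda_T > V | y_{0:T})
print(trend_probability(kap_particles))                    # P(kappa_T > 0 | y_{0:T})
```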
Though any statistical model on positive test cases provides the benefit of uncertainty quantification over non-statistical methods, the success of our model in deployment depends on its ability to improve upon simple techniques for estimating key indicators of the state of COVID-19. The 7-day windowed average on reports at the LTLA level, taken up to 4 days before the present day, is such a simple and publicly reported baseline. We may use the latest week of smoothing distributions to estimate the 7-day windowed average up to the present day, rather than 4 days ago, so that decisions on local restrictions may incorporate information from under-reported count data. The models use a random walk scale selected to optimise the evidence as suggested by Section 6.4. In the absence of the full joint distribution p(x_T−7:T | y_0:T) we estimate the lag-j 7-day windowed average for day T, WA_T^(j), by averaging the marginal smoothing expectations over the most recent seven days (equation (34)). To measure the performance of (34) we assume the lag-7 reports have converged to x_t and measure the absolute error AE^(j) against the windowed average of the converged counts. Figure 15 shows the distribution of absolute errors for the imputed 7-day windowed average across all LTLAs, alongside the performance of the current strategy of ignoring under-reports. It is clear that using the expected value of the smoothing distribution as a now-cast improves the mean absolute error at early stages of reporting, and further that there is a significant reduction in the variance of this estimator compared to the baseline.
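A sketch of the imputed 7-day windowed average and its absolute error against converged counts, assuming (as described above) that the marginal smoothing expectations are simply averaged over the last seven days; the inputs are hypothetical.

```python
import numpy as np

def imputed_windowed_average(smoothed_expectations, window=7):
    """7-day windowed average extended to the present day by substituting the
    expected smoothing value E[x_t | y_{0:T}] for days whose reports have not yet
    converged, in the spirit of the estimator referred to as equation (34) above."""
    x = np.asarray(smoothed_expectations, dtype=float)
    return x[-window:].mean()

def absolute_error(imputed_wa, converged_counts, window=7):
    """Absolute error of the imputed average against the average of counts that
    have since converged to the truth."""
    truth = np.asarray(converged_counts, dtype=float)[-window:].mean()
    return abs(imputed_wa - truth)

# Hypothetical values: smoothing expectations now-cast the last few under-reported days.
smoothed = [88, 104, 110, 118, 121, 117, 125]
truth = [88, 104, 110, 120, 119, 116, 128]
wa = imputed_windowed_average(smoothed)
print(wa, absolute_error(wa, truth))
```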
We have presented a probabilistic model for time-series count data with weekend effects and a lagged reporting process. We demonstrated that it is possible to use incomplete, lagged reports and impute the true count to obtain a seven-day moving average that extends to the present date. Conditional on the now-cast being accurate, this is preferable to waiting for the most recent counts to converge before considering their use in monitoring the progress of the COVID-19 epidemic, as it permits faster response to changes at the LTLA level. We also directly model the underlying intensity of the disease, which lends itself naturally to probabilistic alerting strategies when incidence rates are increasing or have passed above some threshold value.

Our approach relates closely to a number of concurrent works. In [1] a negative binomial model on swab tests is proposed, which is similar to that specified in Section 5.2 when considering the marginal posterior on x_it with λ_it integrated out. They further posit a spatio-temporal dependence on the Poisson rates which extends the work described here. A key difference between [1] and here is the inference mechanism; [1] employs restricted maximum-likelihood parameter estimation, whilst we pursue a fully Bayesian treatment such that all parameter uncertainties are propagated through to quantities of interest. The model described in [2] shares the same binomial likelihood as proposed here. They couple the 442 Scottish post-code districts spatially with a Gaussian Markov Random Field (GMRF). A number of internal notes from PHE have also studied outliers under binomial likelihood models with some spatio-temporal dependence [3, 4]. Generally, LTLAs which are spatially close may exhibit similar trends in the number of positive tests, and so we may expect a spatial coupling between neighbouring LTLAs to yield performance improvements. The most common models for dealing with spatial dependence are typically based on GMRFs; the Besag York Mollié (BYM) model [12] posits spatial dependence through such a field, and although we do not employ this spatial structure here, a number of algorithms for posterior inference under this construction [13, 14] have been used to study COVID-19 [15]. We note however that in this work we have observed mixed reliability in the spatial dependence of reporting rates.

Two recent works have also modelled a lagged reporting process [6, 7]; both consider a temporal structure which codes calendar effects into the reporting lag, where the lagged counts are modelled as a generalised Dirichlet-multinomial distribution. This may deal more naturally with the reporting lag than our use of beta-binomial distributions on the latest available cumulative reports, though neither offers a posterior distribution on a latent intensity and would therefore have to rely on different strategies for probabilistic alerting than those justified in Section 7.2 here.

The accuracy of our model in now-casting the number of positive tests relies on stability of the reporting prior. We suggest that building a reporting prior against testing laboratories rather than LTLAs may make available tighter prior estimates for the reporting rate: since reports for an LTLA constitute an aggregation of reports from testing sites serving that authority, variance in the rates may arise from differences in reporting behaviour across these testing sites. In fact, since the end of October, each lab has provided capacity estimates in publicly available data. It may therefore be possible to estimate lag at the lab level, conditional on the testing load faced by each lab.

Appendix A

In what follows we specify filtering and smoothing distributions for each time step t, from which we can draw samples but which we cannot compute in closed form. Denoting these samples by Λ_t^(i), K_t^(i) ∼ p(λ_t, κ_t | y_t, ..., y_0) and L_t^(j), ζ_t^(j) ∼ p(λ_t, κ_t | y_T, ..., y_0), we build the atomic measures as particle approximations whose accuracy is determined by the number of samples N and M and the nature of the sampling scheme employed to draw these samples.

For the forward pass we build a sequence of conditional distributions by

p(λ_t, κ_t, y_t | y_0:t−1) = ∫ p(λ_t, λ_t−1, κ_t, κ_t−1, y_t | y_0:t−1) dλ_t−1 dκ_t−1    (38)

from which we can draw samples with MH by first proposing a step in κ_t from a symmetric distribution, and then drawing λ_t uniformly from {κ_t + Λ_t−1^(i)}_{i=1}^N. We may use the conditional dependency structure of the model to write

p(λ_t, κ_t | y_0:T) = ∫ p(λ_t, κ_t, λ_t+1, κ_t+1 | y_0:T) dλ_t+1 dκ_t+1
                    = ∫ p(λ_t, κ_t | λ_t+1, κ_t+1, y_0:t) p(λ_t+1, κ_t+1 | y_0:T) dλ_t+1 dκ_t+1    (43)
                    = p(λ_t, κ_t | y_0:t) ∫ [p(λ_t+1, κ_t+1 | λ_t, κ_t) / p(λ_t+1, κ_t+1 | y_0:t)] p(λ_t+1, κ_t+1 | y_0:T) dλ_t+1 dκ_t+1    (44)
                    = p(λ_t, κ_t | y_0:t) ∫ [p(λ_t+1 | κ_t+1, λ_t) p(κ_t+1 | κ_t) / p(λ_t+1, κ_t+1 | y_0:t)] p(λ_t+1, κ_t+1 | y_0:T) dλ_t+1 dκ_t+1    (45)

so that if we have the smoothing atoms {ζ_t^(j)}_{t=1,j=1}^{T,M} then we can evaluate (53). With the addition of weekend effects we must alter the smoothing procedure for each x_t by considering

p(x_t, λ_t, z_t | y_0:T) = p(x_t | λ_t, z_t, y_0:T) p(λ_t, z_t | y_0:T)    (54)
                         = p(x_t | λ_t, z_t, y_t) p(λ_t, z_t | y_0:T)    (55)
                         = [p(y_t | x_t) p(x_t | λ_t, z_t) / p(y_t | λ_t, z_t)] p(λ_t, z_t | y_0:T)    (56)

Now for the joint smoothing distribution on λ_t, z_t we have

p(λ_t, z_t | y_0:T) = p(z_t | λ_t, y_0:T) p(λ_t | y_0:T)    (57)
                    = p(z_t | λ_t, y_t) p(λ_t | y_0:T)    (58)
                    = [p(y_t, z_t | λ_t) / p(y_t | λ_t)] p(λ_t | y_0:T)    (59)
                    = [p(y_t | λ_t, z_t) p(z_t) / p(y_t | λ_t)] p(λ_t | y_0:T)    (60)

which we may employ to evaluate (56).
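To illustrate the weekend-effect decomposition in equations (57)-(60), the sketch below evaluates p(z_t | λ_t, y_t) ∝ p(y_t | λ_t, z_t) p(z_t) on a grid, with the report likelihood obtained by summing the beta-binomial emission over feasible true counts emitted at the reduced rate λ_t z_t; the grid resolution, truncation and parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.stats import betabinom, poisson, beta as beta_dist

def weekend_posterior(y, lam, alpha, beta, a=1.0, b=1.0, grid=200, x_max=2000):
    """Evaluate p(z_t | lambda_t, y_t) on a grid, following the decomposition
    p(z | lambda, y) proportional to p(y | lambda, z) p(z). The report likelihood
    p(y | lambda, z) sums the beta-binomial emission over feasible true counts x
    emitted at the reduced rate lambda * z; x_max truncates that sum."""
    z = np.linspace(1e-3, 1.0, grid)
    x = np.arange(y, x_max)
    like = np.array([
        (betabinom.pmf(y, x, alpha, beta) * poisson.pmf(x, lam * zi)).sum()
        for zi in z
    ])
    post = like * beta_dist.pdf(z, a, b)
    post /= post.sum()                      # normalise over the grid
    return z, post

z, post = weekend_posterior(y=40, lam=120.0, alpha=6.0, beta=4.0)
print(z[np.argmax(post)])                   # posterior mode of the weekend effect
```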
By the sum and product rules of probability we may write

p(y_1:T) = p(y_0) Π_{t=1}^T p(y_t | y_0:t−1)    (61)
         = p(y_0) Π_{t=1}^T ∫ p(y_t, λ_t, λ_t−1, κ_t, κ_t−1 | y_0:t−1) dλ_t dλ_t−1 dκ_t dκ_t−1    (62)

After the substitutions leading to (64), we can also approximate the remaining integral by sampling from the Gaussian mixture built from transition distributions centred on the filtering particles at the previous time step. This yields an approximation to the evidence at each time step; sampling some κ_t = K_t^(i) immediately implies the corresponding λ_t.

Let v_t = Var_{p(x_t | y_0:t)}(x_t) be the variance of the filtering posterior at time-step t. The Kalman gain is given by the ratio of v_t−1 to the total predictive variance, which in effect controls the weight given to the most recent data point in computing the filtering posterior; the relative scale of v_t−1 and σ_y determines this weighting.

Each smoothing step is therefore a multinomial re-sampling of the corresponding filtering atoms with weights (47). For smoothing the counts x_t we begin with the joint distribution p(x_t, λ_t | y_0:T).

Appendix C: Defining the first-order differences of the cumulative reports y_it and placing a multinomial model on these increments, we demonstrate here that the most recent count y_it^(T−t) is a sufficient statistic for x_it, i.e. that p(y_it | x_it) ∝_x_it p(y_it^(T−t) | x_it), where ∝_x_it denotes proportionality with respect to x_it.

Acknowledgements. The authors thank the Joint Biosecurity Centre (JBC) for the proposal of and input to this problem. In particular we acknowledge Dmitry Semashkov for significant engineering effort in the deployment of this model. We also thank Theo Sanderson for providing a consistent and up-to-date repository of snapshots on open data.

References
[1] COVID-19 in England: spatial patterns and regional outbreaks.
[2] A spatio-temporal COVID-19 surveillance tool for Scotland using telehealth data.
[3] PHE Joint Modelling Cell and PHE COVID Outbreak Surveillance Team. COVID-19 local exceedance reporting.
[4] Matt Keeling & the Warwick COVID modelling team. Preliminary exceedance methods.
[5] Binomial thinning models for integer time series.
[6] Nowcasting COVID-19 deaths in England by age and region. medRxiv.
[7] Multivariate hierarchical frameworks for modeling delayed reporting in count data.
[8] A new approach to linear filtering and prediction problems.
[9] Causality: Models, Reasoning and Inference.
[10] A tutorial on particle filtering and smoothing: Fifteen years later.
[11] On the marginal likelihood and cross-validation.
[12] Bayesian image restoration, with two applications in spatial statistics.
[13] Bayesian hierarchical spatial models: Implementing the Besag York Mollié model in Stan.
[14] Approximate Bayesian inference for latent Gaussian models by using integrated nested Laplace approximations.
[15] Spatial Bayesian hierarchical modelling with integrated nested Laplace approximation.