key: cord-0035756-touf4gow
authors: Bettencourt, Luís M. A.
title: An Ensemble Trajectory Method for Real-Time Modeling and Prediction of Unfolding Epidemics: Analysis of the 2005 Marburg Fever Outbreak in Angola
date: 2009
journal: Mathematical and Statistical Estimation Approaches in Epidemiology
DOI: 10.1007/978-90-481-2313-1_7
sha: abfe38c8361c5ec4aeec39a59fc234dfe55ff610
doc_id: 35756
cord_uid: touf4gow

Abstract: We propose a new methodology for the modeling and real-time prediction of the course of unfolding epidemic outbreaks. The method posits a class of standard epidemic models and explores uncertainty in empirical data to set up a family of possible outbreak trajectories that span the probability distribution of model parameters and initial conditions. A genetic algorithm is used to estimate likely trajectories consistent with the data and reconstruct the probability distribution of model parameters. In this way the ensemble of trajectories allows for temporal extrapolation to produce estimates of future cases and deaths, with quantified levels of uncertainty. We apply this methodology to an outbreak of Marburg hemorrhagic fever in Angola during 2005 in order to estimate disease epidemiological parameters and assess the effects of interventions. Data for cases and deaths were compiled from World Health Organization reports as the epidemic unfolded. We describe the outbreak through a standard epidemic model used in the past for Ebola, a closely related viral pathogen. The application of our method allows us to make quantitative prognostics as the outbreak unfolds for the expected time to the end of the epidemic and the final numbers of cases and fatalities, which were eventually confirmed. We provided a real-time analysis of the effects of intervention and possible under-reporting, and placed bounds on the population movements necessary to guarantee that the epidemic did not regain momentum.

Over the last few years mathematical epidemiology [1, 4] has taken an increasing interest in the quantitative study and prediction of unfolding epidemic outbreaks [2, 3, 5, 6, 21]. This is motivated both by the spectacular progress in information technologies, which allows epidemiological information to spread worldwide in real time, and by the increased monitoring of emerging infectious diseases [9, 12, 14, 22, 24-26] such as H5N1 influenza, as well as of potentially engineered biological threats [10].

The well-established tools of mathematical epidemiology, built primarily for a posteriori analysis of outbreaks [1, 4], are however in several respects inadequate to measure and predict the course of unfolding epidemics. The main challenge arises from the necessary confrontation of model predictions with future data, which must be probabilistic. Standard epidemic models, such as SIR or SEIR [1, 4], are deterministic and predict the average number of cases or deaths incurred during an outbreak. Data for large outbreaks are expected to be representative of that mean, and fitting a trajectory to these data under a goodness-of-fit measure is the canonical procedure for estimating average epidemiological parameters. The situation is murkier when outbreaks are small or, more to the point, when predictions from the models are to be confronted with new observations. Then the probabilistic nature of contagion becomes manifest, in that no number of actual cases or deaths will usually match the predicted mean value.
Thus, to assess whether a model is representative of the epidemic under way, it is necessary to supplement this type of prediction with a measure of quantified uncertainty [2, 3], e.g. in the form of a confidence interval. At that level of confidence we can then reject a model if future observations fall outside the predicted interval (through a simple p-test), or otherwise accept the model as predictive.

This article introduces a methodology to do just this. It starts from the standard mean-field models of epidemics and takes each model, as specified by its initial conditions and parameter values (which we collectively denote Γ), as a possible trajectory of the outbreak. Many such trajectories are proposed via a stochastic update rule (a variant genetic algorithm) and weighted in terms of their agreement with the data at a prescribed level of uncertainty. This allows us in turn to reconstruct a probability distribution on Γ and to estimate epidemiological parameters, and any of their correlations, with quantified uncertainty.

The remainder of this paper introduces the mathematical ensemble trajectory method and the associated estimation procedure, and then proceeds to apply it to an outbreak of a poorly known disease: Marburg hemorrhagic fever in 2005 in Angola, for which it was developed. The method made early, accurate predictions of the final toll of the epidemic and its termination time, and revealed erroneous trends in late reporting.

We begin with a general description of the stochastic parameter estimation procedure. We start from the observation that simple (homogeneous mixing) population models cannot be expected to give perfect descriptions of any actual data set. There is always a minimum level of discrepancy between the best model output and the data. We parameterize this discrepancy by the absolute value deviation between the best model prediction and each data point, per point, which we call the least deviation per datum (ldpd):

ldpd = min_Γ [1/(N_X N_O)] Σ_{i=1}^{N_X} Σ_{j=1}^{N_O} |X^i_M(t_j; Γ) − X^i_O(t_j)|,   (1)

where X^i_M(t_j) is the ith state variable (e.g. deaths D, or number of cases C, see below) as an output of the model (a function of a parameter set Γ) at observation time t_j, X^i_O(t_j) is the corresponding observed datum, N_X is the number of variables constrained by data, and N_O is the number of observation points. This measure allows us to discuss and compare how good models are at describing a specific data set, i.e. their goodness of fit.
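As a concrete illustration of Eq. (1), the sketch below computes the deviation per datum for a single candidate trajectory; the ldpd is then the minimum of this quantity over all candidate parameter sets Γ. This is our own minimal Python rendering, with hypothetical function and variable names, not code from the paper.

```python
import numpy as np

def deviation_per_datum(model_outputs, observations):
    """Absolute deviation between model and data, per data point (cf. Eq. (1)).

    model_outputs, observations: arrays of shape (N_X, N_O): one row per
    observed state variable (e.g. cumulative cases C and deaths D), one
    column per reporting date t_j.
    """
    model_outputs = np.asarray(model_outputs, dtype=float)
    observations = np.asarray(observations, dtype=float)
    return np.abs(model_outputs - observations).mean()
```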
Secondly, we expect in general that data contain errors, e.g. due to under-reporting, false positives, accounting errors, etc. An allowable level of uncertainty in the data then translates into an ensemble of acceptable model solutions, or trajectories, which correspond in turn to a set of initial conditions and model parameters, which we write {Γ}. Each Γ in this set can then be weighted by its goodness of fit in a way that generates an estimate of the probability distribution function for the ensemble of model parameters compatible with the data. As a whole this is a stochastic optimization problem (see, e.g., [19] for a general discussion). Based on this idea we perform an estimation of the joint distribution of model parameters P(Γ), conditional on a set of allowable deviations per datum. To be more specific, we write that the unknown exact data point X_E(t_j) can be expressed in terms of the observed datum X_O(t_j) and an error ξ(t_j) as

X_E(t_j) = X_O(t_j) + ξ(t_j).   (2)

The error ξ(t_j) is only known statistically, so in order to proceed we need to specify a model for ξ. Because we expect the variance of the error to be bounded, we assume a Gaussian distribution for ξ, such that

P[ξ(t_j)] = [2πσ²(t_j)]^{−1/2} exp[−ξ²(t_j)/(2σ²(t_j))],   (3)

where the standard deviation σ(t_j) parameterizes the allowed discrepancy between model outputs and data and is to be specified through general expectations on the data. This expectation for the errors defines implicitly an objective function that can be minimized to produce optimal parameter estimates through a search procedure. For example, for each model realization in terms of a set of parameters in an SEIR model, Γ = [S(t_0), E(t_0), I(t_0), D(t_0), R(t_0), β, ε, γ, p], we take this function to be

A(Γ) = Σ_{i=1}^{N_X} Σ_{j=1}^{N_O} [X^i_M(t_j; Γ) − X^i_O(t_j)]² / (2σ²(t_j)),   (4)

which is an implicit function of Γ. If the model could generate exact results we could then make the natural association X_E(t_j) → X_M(t_j). This is usually not the case, since a residual minimal deviation always persists: the minimum ldpd. To account for this we normalize this function to zero by taking H(Γ) = A(Γ) − A(Γ_best), i.e. by subtracting the minimal value of A(Γ), obtained for the best parameter set.

Given this choice of H we can produce, in analogy with standard procedures in statistical physics, a joint probability distribution for model parameters. Since we only have expectations on A(Γ) (and not higher moments A², A³, etc.), the maximum entropy distribution is

P[Γ|{X^i_O}] = e^{−H(Γ)} / Z,   Z = ∫ dΓ e^{−H(Γ)}.   (5)

We can now see how this distribution can be reconstituted from sampling many realizations of the model in terms of different Γ. Note that the probability of each trajectory, w(Γ), is

w(Γ_k) = e^{−H(Γ_k)} / Σ_{k'=1}^{N_t} e^{−H(Γ_{k'})}.   (6)

Then P[Γ|{X^i_O}] can be estimated from N_t trajectories as

P[Γ|{X^i_O}] ≃ Σ_{k=1}^{N_t} w(Γ_k) δ(Γ − Γ_k).   (7)

Figure 1 illustrates an ensemble of trajectories with a variable degree of goodness of fit; trajectories with large deviations from the data points are exponentially suppressed in their contributions to the parameter distribution. This joint probability distribution can then be used to compute any moment of any set of parameters in Γ, including single-parameter distribution functions and cross-parameter correlations, such as covariances, but also any higher moments. The prediction of future observations can now be obtained by convolving the model with the parameter probability distribution estimated to that point, as

⟨X^i(t)⟩ = ∫ dΓ P[Γ|{X^i_O}] X^i_M(t; Γ) ≃ Σ_{k=1}^{N_t} w(Γ_k) X^i_M(t; Γ_k).   (8)

In practice the estimation procedure via trajectories, each corresponding to a parameter set, is potentially difficult because we are dealing with an inverse problem: given a trial set of parameters, comparison with the data can be performed only after the non-linear model dynamical equations have been solved. Fortunately, for models that consist of small numbers of ordinary differential equations the computational effort is trivial on a modern computer. In every case discussed below we used an ensemble of trial solutions, from which we select a number of best sets Γ, according to a standard Monte Carlo procedure weighted by Eq. (5), to generate the next generation of the ensemble. In order to do this we introduce a mutation, implemented in terms of random Gaussian noise around the previous best parameter set. This mutation, followed by the selection of minima, yields an effective downhill search method, capable of exploring large regions of parameter space. It also creates as a byproduct an ensemble of good strings with small deviations from the data. For small enough deviations from the best string we can sample parameter space in an unbiased manner. It is this ensemble, and its best string, that is then used to estimate Eqs. (6)-(7).
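The mutation-selection loop just described can be sketched as follows. This is a schematic reading of the procedure, not the author's code: the population size, number of generations, mutation scales and the positivity constraint are illustrative choices, and `solve_model` stands for any routine that integrates the epidemic model for a parameter set Γ and returns its outputs at the observation times.

```python
import numpy as np

rng = np.random.default_rng(0)

def misfit(params, observations, sigma, solve_model):
    """A(Gamma), Eq. (4), with a common sigma for all data points."""
    model_outputs = solve_model(params)              # shape (N_X, N_O)
    return np.sum((model_outputs - observations) ** 2) / (2.0 * sigma**2)

def ensemble_search(guess, scales, observations, sigma, solve_model,
                    n_gen=200, pop_size=500, n_keep=100):
    """Mutation plus Monte Carlo selection over parameter sets Gamma.

    Returns sampled parameter sets with weights w(Gamma), cf. Eqs. (5)-(7),
    together with the best set found.
    """
    best = np.asarray(guess, dtype=float)
    samples, a_vals = [], []
    for _ in range(n_gen):
        # Gaussian mutation around the current best parameter set
        pop = best + scales * rng.standard_normal((pop_size, best.size))
        pop = np.abs(pop)   # crude positivity; in practice clip to Table 1 ranges
        a = np.array([misfit(p, observations, sigma, solve_model) for p in pop])
        # select survivors with probability proportional to exp(-H)
        w = np.exp(-(a - a.min()))
        keep = rng.choice(pop_size, size=n_keep, replace=False, p=w / w.sum())
        samples.append(pop[keep])
        a_vals.append(a[keep])
        best = pop[a.argmin()]                       # downhill move
    samples = np.concatenate(samples)
    h = np.concatenate(a_vals)
    h -= h.min()                                     # H = A - A_best
    w = np.exp(-h)
    return samples, w / w.sum(), best                # weights, Eq. (6)
```

Moments of any parameter, and the predictions of Eq. (8), then follow as weighted averages over the returned samples.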
Results given in this manuscript involve ensembles with several million realizations and a choice of σ, common to all data points, corresponding to 20% of the ldpd. The standard deviation σ can be made to vary from point to point if more information about the quality of a given datum is available. In this sense the procedure is able to incorporate variable expectations of uncertainty as the data are collected.

We now proceed to describe the application of the method to estimate in real time the course of an outbreak of the rare Marburg hemorrhagic fever in Uíge, Angola, during 2005. This example posed many of the challenges that led to the development of the present methodology. The pathogen is rare and its epidemiological parameters were largely unknown, beyond the observation of the apparent incubation time and the time from onset of symptoms to death in a handful of cases. The mortality was also extremely high and took the lives of many of the medical care providers who intervened early in this remote African region. Because the disease disproportionately affected children under five, the implementation of isolation control measures was also extremely difficult, and their results were uncertain at the time, according to reports by the World Health Organization and journalists on the ground. Nevertheless, even with very scant information, model predictions were accurate from one case report to the next and successfully detected the over-counting of cases and deaths that characterized late-stage reports.

The current work uses data from WHO reports, freely available online or via email, together with basic epidemiological modeling, to generate a characterization and outlook for the outbreak. As sparse as the data are, we hoped that our results would help quantify the progression of the disease and assess the efficacy of the intervention efforts necessary to stop the epidemic. Section 3.1 gives general background information on the disease and the anatomy of the outbreak as far as it was reported at the time in medical journals and the general media. Section 3.2 describes the specific model and parameter estimation procedure. Section 3.3 analyses the scenarios of progression for the disease, taking into account the data points and some qualitative information in WHO reports. In this way we were able to estimate the effect of the interventions started shortly after March 23 in lowering contact rates; we also discuss the effects of under-reporting, place bounds on the population movement restrictions needed so as not to reignite the epidemic, and estimate the time horizon at which the spread would cease, as well as the final number of cases and fatalities.

The 2005 outbreak of Marburg hemorrhagic fever in Angola [13, 15-18, 27] highlighted the dire need for fast and creative intervention in the face of the most severe infrastructure constraints imaginable. Intervention measures, which are the only way to stop the progression of the disease in the absence of a cure, were met with significant levels of noncompliance from the population. Due to the extremely high mortality and the initial lack of adequate preparation, health workers accounted for a large share of the early deaths, eroding confidence in their effectiveness. Furthermore, the viral strain attacked primarily children under five, making it extremely difficult for families to entrust them to the healthcare system in the knowledge of very probable fatality.
Under these circumstances it was paramount to provide the best quantitative guidance and prognosis for the outbreak in real time, so that limited resources could be allocated optimally. This is now starting to be possible thanks to several outbreak surveillance and news systems, provided by the World Health Organization (WHO) [27], Pro-MED mail [15], the CDC [17] and others. We used these reports to generate a data series, analyze the outbreak, and at each time elaborate scenarios for its future course. These can also be used to help gauge the effectiveness of the current levels of intervention and to establish quantitative goals for new and/or increased measures aimed at stopping the epidemic.

The 2005 outbreak of Marburg fever in Angola was uncommon in several respects [13, 15, 20, 27]. It was the largest outbreak of the disease to date in a general population, it had an extremely high case fatality rate (88%, compared with 23% and 70% in previous smaller outbreaks), and it attacked disproportionately children under five (75% of the cases). Marburg fever symptoms in their earliest stages are non-specific. The condition can easily be confused with other, more common diseases endemic in the region, such as malaria, yellow fever and typhoid fever, an issue that led to biases in reporting, especially once awareness was raised after the identification of several hundred cases and deaths. Estimates for several of the outbreak's relevant rates are [27]: an incubation period of about 3-9 days, a time to death (2005 outbreak) of 3-7 days after onset of symptoms, and a high proportion of cases developing hemorrhagic symptoms within 5-7 days. The Marburg virus is a member of the family Filoviridae, which also includes Ebola; Marburg, however, is much rarer. The reservoir of the disease remains unknown (some clues point to bats or other cave-dwelling animals [11, 20]). Primates can carry the virus but also contract the disease and manifest symptoms.

Uíge is a tropical province in the interior North West of Angola, bordering the Democratic Republic of Congo. The total population of the province is estimated at about half a million people and is mostly rural. In 2005 the province's two largest cities were Uíge, with about 170,000 people, and Negage, with about 25,000 people. These cities' hospitals serve most of the province's population. The population of Angola is young (43.5% under 14) and has a high fertility rate (6.33 children/woman), creating conditions for very high and effective transmission of the Marburg virus. Intervention efforts by the Government and the World Health Organization started in earnest on March 23, 2005.

We use a simple modification of the standard SEIR epidemic model in order to account for the high mortality rate of the present outbreak. The model applies to a homogeneously mixing population and thus does not distinguish individuals by, e.g., age, a factor that is important in the current outbreak; the data necessary to draw such distinctions, if they exist at all, are not available in the public domain. Because most cases occurred in Uíge, or are thought to have originated through contagion incurred there, we take the total number of cases and the total number of fatalities as the targets for parameter estimation. The SEIR model [1, 4] has been shown to describe well the outbreak dynamics of the related Ebola virus [7].
Specifically, our model is

dS/dt = −β S I / N,
dE/dt = β S I / N − ε E,
dI/dt = ε E − γ I,   (9)
dR/dt = (1 − p) γ I,
dD/dt = p γ I.

Here, as usual, S(t) is the number of susceptibles at time t and E(t) the number of exposed, who naturally progress to manifest the disease as infectives I(t). D(t) is the number of fatalities at time t, whereas R(t) is the number of recovered. With these choices the population N, summed over all classes, is fixed. The total number of cases tallied at time t is the sum of the presently infected, deceased and recovered, C(t) = I(t) + D(t) + R(t). The incubation time is parameterized by ε⁻¹; β is the contact rate, which is the product of the (assumed independent) probability of a contact between an infected and a susceptible and the effectiveness of that contact. Lowering the value of β is the target of intervention [7]. The mean time spent in the infective class is γ⁻¹, after which an individual transits to the recovered class with probability (1 − p) and dies with the complementary probability p. The set of parameters Γ = {S(t_0), E(t_0), I(t_0), D(t_0), R(t_0), β, ε, γ, p}, i.e. the initial conditions for each of the state variables and the dynamical parameters, is the target of our estimation procedure, as described in Section 2. Parameter estimates are bounded within intervals dictated by knowledge of the outbreak [27]. These intervals are summarized in Table 1.
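The model (9) is a small ODE system and is cheap to integrate. A minimal SciPy sketch is given below; the parameter values and initial conditions in the example run are placeholders within the ranges discussed in the text, not the estimates of Table 2, and the β S I / N (homogeneous mixing) transmission term follows the form used for Ebola in [7].

```python
import numpy as np
from scipy.integrate import solve_ivp

def seir_deaths(t, y, beta, eps, gamma, p):
    """Right-hand side of the modified SEIR model, Eq. (9).
    y = [S, E, I, R, D]; the total N = S+E+I+R+D is conserved."""
    S, E, I, R, D = y
    N = S + E + I + R + D
    new_inf = beta * S * I / N       # transmission under homogeneous mixing
    return [-new_inf,
            new_inf - eps * E,       # exposed progress at rate eps
            eps * E - gamma * I,     # infectives leave the class at rate gamma
            (1.0 - p) * gamma * I,   # recovery with probability 1 - p
            p * gamma * I]           # death with probability p

# Illustrative run: 120 days from an initial handful of infections.
y0 = [170_000, 5, 1, 0, 0]                       # S, E, I, R, D at t0
sol = solve_ivp(seir_deaths, (0.0, 120.0), y0,
                args=(1.5, 1 / 7, 1 / 3, 0.9), dense_output=True)
t = np.linspace(0.0, 120.0, 121)
S, E, I, R, D = sol.sol(t)
cases = I + R + D                                # cumulative cases C(t)
```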
We started tracking the outbreak at the beginning of April, shortly after it was first identified on March 23. Our first predictions were made on April 26. At that time there were two viable scenarios for the history of the outbreak: one where it had started soon before March 23 (which we will call the simplest scenario), and one where it had started in October 2004, as suggested by retrospective analysis [27]; the latter was eventually confirmed by our estimation procedure. We proceed to tell a brief history of our prognosis as it happened.

In the simplest scenario we constrain the model by the estimated numbers of cases and deaths as reported by WHO, without any further constraints. Below we consider the fact that the epidemic is thought to have started in October 2004 as an additional qualitative constraint. The best fit trajectories for cases and fatalities are shown in Fig. 2, together with the data points, while parameters are displayed in Table 2. These estimates place the incubation time and the lifetime of the infective state at the shorter end of their allowed ranges, and the mortality at the higher end. The contact rate is high, leading to a large basic reproductive number, which measures the expected number of new cases caused by the introduction of an infective individual into a population of susceptibles. Given the population conditions, the high infant mortality due to the disease, and cultural practices of care for the ill and deceased, we believe these numbers could not be excluded. This estimate predicted that the outbreak was then nearly over: the number of new infected cases was dropping in time. Its final state would be reached around May 9, with a total of 276 cases and 261 deaths. The upper end of the 95% confidence level intervals, shown in Fig. 3, would take these numbers up to 304 cases and 287 deaths by May 9-10. We show below that this scenario could be rejected as more data eventually came in. Figure 3 shows the 95% confidence level intervals for the numbers of cases and deaths, drawn from an ensemble of about 100,000 realizations of the model that fit the data within 20% of the best fit shown in Fig. 2.

The effectiveness of intervention can be assessed by allowing the contact rate β to vary in time. This strategy was used by Chowell et al. [7] to model intervention in recent Ebola epidemic outbreaks in Uganda and the Democratic Republic of Congo. The varying contact rate can be parametrized as [7]

β(t) = β_0 for t < t_int;   β(t) = β_1 + (β_0 − β_1) e^{−(t − t_int)/κ} for t ≥ t_int,   (10)

where t_int is the time at which intervention starts, κ is the time for the intervention to set in, and β_0 and β_1 are the asymptotic contact rates before and after. We chose t_int to be March 23, when WHO reported for the first time to be "supporting efforts by the Ministry of Health in Angola to strengthen infection control in hospitals, to intensify case detection and contact tracing, and to improve public understanding of the disease and its modes of transmission" [27] (March 23 report). We find a modest but significant change in contact rates, from β_0 = 1.534 ± 0.013 to β_1 = 1.401 ± 0.010 over a period of just over 10 days, i.e. a decrease in the contact rate of about 8.7%. This gives the best fit to the data of all scenarios, with ldpd = 4.51. This change in contact rate highlights both the monumental efforts on the ground to contain the spread of the disease and the amply reported [8, 13] resistance they encountered, due in large part to the unkind characteristics of the disease.

Although the model estimated that the epidemic was then contained, population movements that escape health care intervention can still bring more people into the susceptible class. These effects can be monitored in real time via the estimation of the critical number of additional susceptibles that would cause the epidemic to regain momentum. The simplest estimate follows from asking what number of susceptibles will reignite the growth of the infected class (i.e. make dI/dt > 0). From Eqs. (9) this threshold is

S* = (γ/β) N.

For S > S* the number of new infections will grow. We can further write S* = S_now + ΔS and similarly N = N_now + ΔS, where S_now and N_now are the present numbers of susceptibles and of the population participating in the epidemic (i.e. the sum over all classes) and ΔS is the critical number of additional susceptibles. Given the best parameter estimates for the simple scenario on April 27, this resulted in ΔS ≈ 70 individuals, clearly a very small fraction of the general population. We repeat this procedure in the other scenarios below.
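A sketch of the intervention parametrization (10) and of the threshold calculation follows. The closed form for ΔS simply solves S_now + ΔS = (γ/β)(N_now + ΔS) and assumes β > γ; all names are ours, not from the paper.

```python
import numpy as np

def contact_rate(t, beta0, beta1, t_int, kappa):
    """Time-varying contact rate, Eq. (10): beta0 before intervention,
    relaxing exponentially towards beta1 on a timescale kappa after t_int."""
    t = np.asarray(t, dtype=float)
    return np.where(t < t_int, beta0,
                    beta1 + (beta0 - beta1) * np.exp(-(t - t_int) / kappa))

def critical_extra_susceptibles(S_now, N_now, beta, gamma):
    """Additional susceptibles Delta-S that put the system back over the
    epidemic threshold S* = (gamma/beta) N, i.e. make dI/dt > 0."""
    ratio = gamma / beta             # requires beta > gamma
    return (ratio * N_now - S_now) / (1.0 - ratio)
```

Plugging in the best-fit values for a given report date yields the running estimate quoted in the text (ΔS ≈ 70 individuals on April 27 for the simplest scenario).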
In discussing the results of parameter estimation in this simplest scenario we found that both the case fatality rate and γ appeared at the higher end of their allowed ranges. We now discuss how this may be the result of case under-reporting. Under-reporting is probable given the remoteness of the region and the initial resistance to intervention efforts amply reported in the news [8]. To estimate its effects, we assume that the reported number of infected cases I(t) is a fixed fraction of the real number of total cases I_tot(t), so that

I_tot(t) = λ I(t),   λ ≥ 1.

We also assume that the fraction of under-reporting in deaths is much smaller, so that effectively D(t) ≈ D_tot(t). We can therefore ask for the transformation in parameters that leaves the dynamics of deaths invariant under the rescaling of the infected. The equation for deaths becomes

dD/dt = p γ I = (p γ / λ) I_tot ≡ p′ γ′ I_tot,   with p′ γ′ = p γ / λ.

Thus, since λ > 1, the actual mortality is lower than estimated, and/or the lifetime of the infectious state γ⁻¹ is longer. If we ask, e.g., that γ′⁻¹ = 5 days and that the mortality is similar to that observed in the previous outbreak of Marburg fever in the Democratic Republic of Congo (about 70%), we would obtain λ = p γ / (p′ γ′) > 2, suggesting that less than half of infected cases may have been reported. This was at the time most probably an overestimate. If we instead allow the mortality to remain above 90%, then λ ≈ 5/3 = 1.67, which still suggests a large fraction of unaccounted cases if the simplest scenario were to hold. This transformation also has implications for the evolution of E and S, but as these states are unconstrained by the data we shall not discuss such features here. Whether case under-reporting explained the high estimates for p and γ, or the modeling of the progression of the infective state was too simple in this scenario, was an issue that required more data. It was resolved by the release of the next two data points, see below.
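To make the rescaling concrete: λ ≈ 5/3 with the mortality held fixed implies an estimated infectious lifetime of γ⁻¹ = 3 days, since in that case λ = γ′⁻¹/γ⁻¹ = 5/3. If we further assume, for illustration only, an estimated mortality of p ≈ 0.9 (the text states only that the estimate lay above 90%), together with p′ = 0.70 and γ′⁻¹ = 5 days, then

λ = (p/p′)(γ′⁻¹/γ⁻¹) ≈ (0.9/0.7) × (5/3) ≈ 2.1,

i.e. a reporting fraction 1/λ ≈ 0.47, consistent with the statement that less than half of infected cases may have been reported.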
There is evidence, based on retrospective analysis [27] (March 23 report), that the epidemic started in October 2004. We now enforce this constraint in the parameter estimation procedure, analyze the resulting parameter ranges, and use them to make prognoses for the development of the epidemic. There are two caveats to performing parameter estimation under these circumstances. First, the start of the epidemic in October 2004 introduces a constraint 4-5 months (158-121 days, taking the beginning and end of the month as bounds) before the first number of cases was announced. Estimating epidemic parameters under such a distant constraint is delicate and tends to lead to high sensitivity in the parameter search. As such, it is intrinsically more difficult to guarantee a fair sampling of all possible solutions consistent with the data at some error level. Second, in the very early stages of the epidemic a stochastic model is probably more appropriate than (9), which makes a number of assumptions about a homogeneously mixing population and about the applicability of averages to single-instance data. As such, the results of the present estimation should be considered more susceptible to systematic error than those given above.

With these caveats in mind we proceed with the estimate. Results are shown in Table 3 and in Fig. 4. The essential qualitative consequence of enforcing that the epidemic started in October 2004 is to make the derivative of the solution for the total number of cases positive, if small, when the virus started being tracked on March 23. Because the model (9) is monotonic in the total number of cases and deaths, this necessarily generates a solution with a larger positive derivative at those first few data points. Taken at face value this constraint had two consequences: (i) it suggests that the initially reported numbers of cases (until about March 31) were underestimates of the real numbers (although the number of deaths is well fit by the model, and may thus not itself have been underestimated); and (ii) it led to a higher estimate, relative to the simplest scenario above, of the eventual number of cases and deaths. Without further intervention (which was then on the way), and in the absence of population movements or any other significant external event, the epidemic was projected to continue to grow.

The results show that intervention, modeled by allowing β to vary according to (10) while holding other parameters to the values of Table 3, had by then managed to curb the growth rate of the outbreak, but not to stop it altogether. The pool of susceptibles was estimated not to have grown over the weeks before May 9, although a small growth could not be completely excluded and was suggested by news reports, see below. The mean trajectory would eventually asymptote to an expected total of 356 cases, with 331 deaths. This compares to the estimates (made before April 27) of 497 cases and 452 deaths in the absence of intervention (black lines, Fig. 6). Intervention cut contact rates by a factor of about 40% and is estimated to have taken effect starting April 4, 2005, taking about 12 days to be implemented (Fig. 7).

Interestingly, just as the outbreak seemed to be simmering down, the next few data points indicated a dramatic re-start of the epidemic. Results up to May 9 showed that intervention was curbing the growth rate of the outbreak but had not succeeded in stopping it altogether. The new data released by the Angolan Government and WHO on May 26 showed a dramatic reversal of that trend. Many new cases had been registered: 399, up from 337 a week before, accompanied by a sharp increase in the number of deaths, to 355 from 311. These numbers were statistical anomalies, lying far above the upper end of the 95% confidence intervals for cases and deaths estimated on May 9; thus they required a qualitative change in events on the ground. In the context of the model these new data points could only be accounted for in two very different scenarios. First, the new numbers could simply be wrong, attributing cases and deaths due to other causes to Marburg. Alternatively, the new data could indicate that a large number of new susceptibles had entered the affected population, so that the epidemic threshold had been crossed again and the number of new infected would subsequently grow at an accelerated pace. In fact we could estimate that the susceptible population would then have to be growing, after the very end of April, at a rate of up to 68 individuals per day. This was tantamount to an epidemic restart, visible in Fig. 8 as the average trajectories changed curvature.

[Fig. 8: The new data of May 27 were an anomaly, far exceeding the upper bound of the 95% confidence level intervals for cases and deaths estimated up to May 9. The new best fit trajectories allow β to vary upwards and require a linear inflow of people into the susceptible class at a rate of 68 persons per day.]

Needless to say, under such conditions the outlook for the development of the outbreak was rather bleak, with hundreds more cases and deaths predicted to follow. These data points were eventually revised down, after a long hiatus in reporting between June 17 and July 13, confirming the prognosis of May 9. The outbreak of Marburg hemorrhagic fever in Uíge, Angola was officially declared over on November 7, 2005 by the Angolan Ministry of Health [23], with its last laboratory-confirmed case reported July 22. The outbreak claimed a total of 329 lives out of 374 identified cases, a case fatality rate of 88%. These numbers were correctly predicted on May 9, 2005, amid much uncertainty in WHO reports and in the news about the future development of the outbreak.

The principal objective of the present study was to investigate the possibility of modeling in real time the spread of a new epidemic of a rare emerging disease, with very sparse data available. We used standard outbreak reports available online from the WHO [27] and Pro-MED mail [15] to construct a small data set, which we then employed to estimate epidemiological parameters and future case and death numbers with quantified uncertainty. The output of the model was used to provide guidance on the outlook of the epidemic under given qualitative scenarios and to construct quantitative goals for intervention policy on the ground, creating the potential to help optimize resource allocation under severe logistical constraints.
Among other quantities, our approach allows for the estimation of epidemiological parameters with quantified uncertainty, and for the projection of the total number of cases and deaths at the (also predicted) time of the end of the outbreak. These estimates can then be used to test qualitative scenarios about the outbreak, such as the time of occurrence of the index case, and to quantify in real time the effects of interventions and estimate population movements. This approach, much like any general epidemiological mathematical modeling [1, 4], makes certain simplifying assumptions about the nature of the outbreak. While these may be suspect to practitioners on the ground, it has been amply demonstrated that such models retain substantial predictive power, which tends to trump projections for case numbers and deaths generated by expert opinion. We believe that, even if not perfect, this type of "real-time" epidemiological modeling is now feasible [2, 3, 5, 6, 21] and could become an essential tool in providing quantitative scenarios and targets for the allocation of limited resources on the ground. It should also be used to inform the scientific community and the public, as well as public health officials, of rational expectations and choices during unfolding new outbreaks.

References
- Infectious Diseases of Humans
- Real Time Bayesian Estimation of the Epidemic Potential of Emerging Infectious Diseases
- Towards real time epidemiology: Data assimilation, modeling and anomaly detection of health surveillance data streams
- Mathematical Models in Population Biology and Epidemiology
- Real-time estimates in early detection of SARS
- Estimating in real time the efficacy of measures to control emerging communicable diseases
- The reproductive number of Ebola and the effects of public health measures: The cases of Congo and Uganda
- Mysterious Viruses as Bad as They Get
- Race against time
- Spatial and Syndromic Surveillance for Public Health
- Fruit bats as reservoirs of Ebola virus
- The challenge of emerging and re-emerging infectious diseases
- Preparing for the next pandemic
- Marburg and Ebola virus infections in laboratory non-human primates: A literature review
- See Virginia Bioinformatics Institute Pathport site for general
- Introduction to Stochastic Search and Optimization
- Studies of reservoir hosts for Marburg virus
- Different epidemic curves for severe acute respiratory syndrome reveal similar impacts of control measures
- Are we ready for pandemic influenza?
- Marburg hemorrhagic fever: Angola 2005 outbreak
- Origins of major human infectious diseases
- Host range and emerging and reemerging pathogens
- Emerging pathogens: the epidemiology and evolution of species jumps
- World Health Organization Outbreak updates